Lindsey Clay
Feb 4, 2024

Is there an acceptable human cost of doing business?

We might not be able to fix the internet but we can do more to help online advertising – can’t we?

Mark Zuckerberg, chief executive of Meta, testifies before the Senate Judiciary Committee (©GettyImages)

"Blood on your hands". That’s the chilling accusation made this week (31 January) against Mark Zuckerberg and other social media bosses at a hearing of the US Senate Judiciary Committee.

The committee was examining the inadequate protection of children online – from the enabling of sexual predators to the promotion of unrealistic beauty standards.

That it has come to this, that such an accusation can even be made, supported by evidence, is astonishing.

The hearing followed another woeful incident online. Like most of you, I hope, I was horrified by the Taylor Swift nude deepfake scandal: the fact it could happen, the fact it could spread, and the fact it continued spreading even after it was discovered and denounced.

And, in this election year, we have every reason to fear a tidal wave of misleading deepfakes online attempting to warp political debate and outcomes. It’s ugly, it’s damaging, it’s dangerous. Some of it will hit society’s shores in advertising.

While I get that the zillion hours of user-generated content being uploaded for free to open platforms is very hard to pre-vet, advertising is different, more straightforward. We might not be able to fix the internet, but we could certainly do more to help online advertising – can’t we?

If human specialists were used to pre-clear all ads before they appeared, as they are in other media, then the scam, fake, illegal, harmful, or misleading ads that continue to see the light of day online would begin to evaporate.

People and businesses pay for advertising space. So why not charge more to cover the cost of rigorous clearance, accept less profit, or move away from an advertising-funded business model altogether?

The automated ad reviewing systems using AI and machine learning that tech giants employ are impressive and clever beyond my comprehension. They catch a lot of the bad. But, as is frequently shown, they don’t catch all of it and there’s no suggestion they ever will.

So we have a choice: advocate for a proper clearance system – like Clearcast, essentially an upstream Advertising Standards Authority – or accept that platforms that choose automation are, in effect, allowed to show some illegal, scam, or misleading ads, and just live with that as acceptable collateral damage.

I appreciate that proper ad clearance will impact on the business models and profits of companies that currently choose automation.

But, as the tech giants make significant profits, it wouldn’t bankrupt them to be more responsible. A cost to them; a boon to society and their reputations (and advertising’s reputation generally; we’re an industry suffering from an embarrassing deficit of trust).

And, to be blunt, cost shouldn’t be an issue anyway. Principles should cost something. If cost is an issue, then it suggests a (knowingly) flawed business model. No company has an innate right to make money while knowingly repeatedly causing social damage.

I know the argument against: they’ll say they do clear their ads. They invest considerably in AI and machine learning technologies to automate the review process. Human reviewers are also employed – lots of them – to handle complex cases. And they remove ads once they become aware that these fall short of their standards.

Plus they’ll say there are just too many ads to manually process and it’s all happening in real time, allowing advertisers to tweak campaigns/creative. Too much is happening too quickly. Automation is the only answer.

If one of our industry’s goals is to eradicate harmful or illegal advertising then system changes have to happen upstream before any ads are seen. Removal can, by definition, only happen after some damage has been done.

How much collateral damage is acceptable in a business model? When do you accept a business model needs fixing? Where do you draw the line on what is or isn’t your responsibility as a business?

Automation benefits much of life – the precision of robotic surgery in delicate procedures, for example. But where interpretation and nuance are involved, where there is potential criminality and social harm – and where money is changing hands – step forward the trained humans.

You can have a thorough ad clearance process or a convenient but flawed one; you can’t really have both.


Lindsey Clay is the chief executive of Thinkbox

Source: Campaign UK
