Lindsey Clay
Feb 4, 2024

Is there an acceptable human cost of doing business?

We might not be able to fix the internet but we can do more to help online advertising – can’t we?

Mark Zuckerberg, chief executive of Meta, testifies before the Senate Judiciary Committee (©GettyImages)

"Blood on your hands". That’s the chilling accusation made this week (31 January) against Mark Zuckerberg and other social media bosses at a hearing of the US Senate Judiciary Committee.

The committee was examining the inadequate protection of children online – from the enabling of sexual predators to the promotion of unrealistic beauty standards.

That it has come to this, that such an accusation can even be made, supported by evidence, is astonishing.

The hearing followed another woeful incident online. Like most of you, I hope, I was horrified by the Taylor Swift nude deepfake scandal: the fact it could happen, the fact it could spread, and the fact it kept spreading even after it was discovered and denounced.

And, in this election year, we have every reason to fear a tidal wave of misleading deepfakes online attempting to warp political debate and outcomes. It’s ugly, it’s damaging, it’s dangerous. Some of it will hit society’s shores in advertising.

While I get that the zillion hours of user-generated content uploaded for free to open platforms are very hard to pre-vet, advertising is different, more straightforward. We might not be able to fix the internet, but we could certainly do more to help online advertising – can’t we?

If human specialists were used to pre-clear all ads before they appeared, as they are in other media, then the scam, fake, illegal, harmful, or misleading ads that continue to see the light of day online would begin to evaporate.

People and businesses pay for advertising space. So why not charge more to cover the cost of rigorous clearance, accept lower profits, or abandon the advertising-funded business model altogether?

The automated ad reviewing systems using AI and machine learning that tech giants employ are impressive and clever beyond my comprehension. They catch a lot of the bad. But, as is frequently shown, they don’t catch all of it and there’s no suggestion they ever will.

So we have a choice. Advocate for a proper clearance system, like Clearcast (essentially an upstream Advertising Standards Authority), or accept that platforms that choose automation are, in effect, allowed to show some illegal, scam, or misleading ads – and just live with that as acceptable collateral damage.

I appreciate that proper ad clearance would affect the business models and profits of companies that currently choose automation.

But, as the tech giants make significant profits, it wouldn’t bankrupt them to be more responsible. A cost to them; a boon to society and their reputations (and advertising’s reputation generally; we’re an industry suffering from an embarrassing deficit of trust).

And, to be blunt, cost shouldn’t be an issue anyway. Principles should cost something. If cost is an issue, then it suggests a (knowingly) flawed business model. No company has an innate right to make money while knowingly and repeatedly causing social damage.

I know the argument against: they’ll say they do clear their ads. They invest considerably in AI and machine learning to automate the review process. Human reviewers are also employed – lots of them – to handle complex cases. And they remove ads when they become aware that those ads fall short of their standards.

Plus they’ll say there are just too many ads to process manually, and it’s all happening in real time, allowing advertisers to tweak campaigns and creative on the fly. Too much is happening too quickly. Automation is the only answer.

If one of our industry’s goals is to eradicate harmful or illegal advertising then system changes have to happen upstream before any ads are seen. Removal can, by definition, only happen after some damage has been done.

How much collateral damage is acceptable in a business model? When do you accept a business model needs fixing? Where do you draw the line on what is or isn’t your responsibility as a business?

Automation benefits many areas of life; the precision of robotic surgery in delicate procedures, for example. But when interpretation and nuance are involved, when potential criminality and social harm are at stake – and when money is changing hands – step forward the trained humans.

You can have a thorough ad clearance process or a convenient but flawed one; you can’t really have both.


Lindsey Clay is the chief executive of Thinkbox

Source: Campaign UK
