Jessica Goodfellow
Apr 9, 2020

Coronavirus misinformation slipping through Facebook's ad review system

An ad encouraging users to drink bleach was among those approved to run in a new investigation that exposes the dangerous flaws in Facebook's automated ad review system.

These ads were approved by Facebook's ad review system.

Facebook is approving ads that contain deliberate and dangerous misinformation relating to COVID-19, according to an investigation by US nonprofit Consumer Reports.

The organisation created seven paid ads containing varying degrees of coronavirus-related misinformation to test whether Facebook's systems would flag them before they ran, but all seven were approved.

To do this, it set up a fake Facebook account and a page for a made-up organisation called the "Self Preservation Society", through which it created the ads. The ads all featured content that Facebook has banned over the past few months, including “claims that are designed to discourage treatment or taking appropriate precautions” and “false cures”.

The ads ranged from subtle to blatant misinformation. But even the most flagrant, which contained claims such as “Coronavirus is a HOAX", that social distancing "doesn’t make any difference AT ALL", and that people can “stay healthy with SMALL daily doses of bleach", were approved. Consumer Reports pulled the ads before they ran.

Facebook confirmed that all seven ads created by Consumer Reports violated its policies.

In a statement, a Facebook spokesperson said: “While we’ve removed millions of ads and commerce listings for violating our policies related to COVID-19, we’re always working to improve our enforcement systems to prevent harmful misinformation related to this emergency from spreading on our services.”

Facebook's systems can review ads before, during or after they run. Its ad review process relies primarily on automated review; human reviewers are used to improve and train the automated systems and, in some cases, to review specific ads.
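To illustrate the kind of automated pre-screen such a pipeline might include, the sketch below is a purely hypothetical, keyword-based filter in Python. It is not a representation of Facebook's actual classifiers, which the company says combine machine learning with human review; the pattern list is an assumption based only on the claims in Consumer Reports' test ads.

```python
import re

# Hypothetical banned-claim patterns, loosely based on the claims in
# Consumer Reports' test ads. A production moderation system would use
# trained classifiers plus human review, not a static keyword list.
BANNED_PATTERNS = [
    r"coronavirus\s+is\s+a\s+hoax",
    r"social\s+distancing\s+doesn'?t\s+make\s+any\s+difference",
    r"(drink|doses?\s+of)\s+bleach",
]


def prescreen_ad(ad_text: str) -> list:
    """Return the banned-claim patterns matched by the ad copy, if any."""
    text = ad_text.lower()
    return [p for p in BANNED_PATTERNS if re.search(p, text)]


if __name__ == "__main__":
    sample = "Stay healthy with SMALL daily doses of bleach"
    matches = prescreen_ad(sample)
    if matches:
        print(f"Rejected: matched {matches}")
    else:
        print("Passed pre-screen; send for further review")
```

Even a crude rule like this would have caught the most blatant of the test ads, which is why the investigation focuses on gaps in the automated review rather than on the difficulty of detection itself.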

The seven ads would likely have been found and removed eventually. But by that point they could already have done damage among users. This is particularly concerning given Facebook's reach: the parent company operates four of the top six social networks in the world.

With that vast reach, Facebook has faced mounting pressure over the past few months to stem the spread of misinformation related to COVID-19 on its platforms. But it is attempting to do this with fewer staff, having sent all its content reviewers home for their safety.

Facebook did warn last month that, with a reduced and remote workforce and a greater reliance on automated technology, the platform may "make mistakes", specifically an increase in ads being "incorrectly disapproved". It did not flag the possibility of the opposite, ads being incorrectly approved, or the far more dangerous consequences of that.

Source:
Campaign Asia