The Global Alliance for Responsible Media (GARM) is releasing guidelines to help the industry demonetise misinformation, control how media is placed around sensitive content, and establish brand safety requirements for the metaverse.
GARM revealed it would tackle these “critical challenges” during an event at the Cannes Lions Festival of Creativity on Tuesday.
The cross-industry initiative first released guidelines on harmful content in 2020 and is now expanding that definition to include misinformation, a content category it began working to add in June 2021.
GARM defines misinformation as “the presence of verifiably false or willfully misleading content that is directly connected to user or societal harm.”
The alliance has previously said that creating consistent definitions for harmful content is “an imperative first step” toward tackling brand safety issues.
Alongside the expanded definition, it is introducing guidelines to demonetise misleading information online.
The danger of misinformation has been highlighted during the COVID-19 pandemic and the war in Ukraine, as false information has put lives at risk.
Tech platforms have been attempting to stop the spread of misinformation online through a combination of content moderators and software. But a report published by GARM in May revealed that there is still room for improvement in how platforms take action on misinformation.
The misinformation guidelines were created in collaboration with the European Commission and non-governmental partner organisations such as Consumers International, Reporters Without Borders, ADL and NAACP.
GARM is also releasing standards for managing ad placements relative to “safe but sensitive content” such as content related to death, injury or military conflict. The standards will cover placements within News Feeds, Stories, in-stream video, in-stream audio, and display overlays.
However, the standards do not currently cover placements within livestreams. Defining a minimum safety standard for monetising the format will require “further work”, GARM said, given how readily it is abused; a livestream of a recent mass shooting in the U.S. is just the latest example.
GARM also announced plans to publish brand safety principles for the metaverse, as members look for ethical advertising practices to implement in the emerging environment. The goal is to outline these principles before commercialisation of the metaverse begins.
Marc Pritchard, chief brand officer of Procter & Gamble, a founding member of GARM, said the initiative has “achieved much in a short space of time” but “more still needs to be done”.
“Broadening definitions to include misinformation, introducing adjacency standards and a proactive approach to monetising the metaverse are important next steps in ensuring that our brands can safely reach the diverse consumers we serve,” he said.
GARM was launched by the World Federation of Advertisers (WFA) during the Cannes festival in 2019 with the mission to demonetise harmful content such as hate speech, terrorism, bullying and misinformation, and establish standards on the use of personal data.
It was set up in the wake of a series of scandals related to how brands were placed next to illegal and extremist content on platforms such as Facebook and YouTube.
The organisation currently has 122 members, a mix of advertisers, agency holding companies, media platforms, ad tech companies and industry associations.