PARTNER CONTENT
The rapid advancement of artificial intelligence (AI) has generated both excitement and concern about its potential ethical implications. AI has moved beyond supporting back-end operations and now takes centre stage in advertising, enabling hyper-personalised campaigns, predictive targeting, and optimisation.
As the industry continues to realise the potential of AI, the need to balance the development of AI against the potential misuse of the technology, whether through laws, codes of conduct, or self-regulation, is starting to emerge.
The EU takes the lead in attempts to regulate AI
Recognising this, the European Union (EU) has taken the lead by adopting the world's first AI legislation, the EU AI Act, which aims to strike a balance between enabling the development of AI in Europe and mitigating the risks associated with AI systems, while preserving the EU's fundamental values.
This is a significant step towards governing AI and establishing a robust AI regulatory system across the EU, expected to take full effect in 2026. The act adopts a risk-based classification scheme that grades AI systems by the level of threat they pose to individuals or society.
The EU AI Act will promote transparency for limited-risk applications such as AI-enabled chatbots, emotion recognition, biometric categorisation, and deep fakes. However, more stringent rules apply to high-risk applications such as certain use cases of AI in the health or immigration sectors. In addition, certain AI applications for social scoring are completely banned due to fundamental rights concerns.
Implications for the advertising industry
Even though the EU AI Act does not regulate the advertising industry and its services directly (advertising is not on the list of high-risk AI systems), it serves as a timely reminder for marketers to be mindful of ethical AI practices. It offers a common foundation for the responsible development of AI-based systems and sound data governance, which the advertising ecosystem can leverage to drive positive change within the industry.
One such change is that generative AI service providers like ChatGPT or Midjourney are now required to disclose any copyrighted material leveraged in their AI development, including copyrighted material used in private algorithmic training. This ensures transparency and upholds the rights of original content creators.
Companies may also be incentivised to reassess their utilisation of AI in targeting, building audience profiles, and decision-making processes, as proactively identifying and rectifying potential biases and misuses not only promotes fairer and more inclusive advertising experiences but also fosters increased trust with consumers. For instance, the EU AI Act's emphasis on responsible AI development could influence consumer preferences and advertising trends. Consumers might become more discerning about the types of advertising they engage with, favouring brands that prioritise transparency and ethical AI practices.
Ultimately, the goal of the ethical use of AI is to foster a more transparent, trustworthy, and consumer-centric advertising ecosystem, upholding consumer rights and promoting fair competition. Advertisers must adapt to these changes by embracing ethical AI practices that prioritise consumer well-being and respect consumer autonomy.
Shaping ethical AI standards in the advertising industry
While Southeast Asian (SEA) countries prioritise driving innovation and economic growth through AI, fostering innovation without ethical frameworks can create risks and erode public trust. Instead of viewing the absence of stringent regulations as a challenge, companies in SEA can seize this opportunity and take the lead in shaping the region's ethical AI landscape.
Companies can consider forming a cross-functional team that will oversee ethical AI practices across all activities. Doing so can help draw diverse perspectives from within and outside the company, ensuring trust and continuous improvement through regular reviews and adjustments. Additionally, companies can actively contribute to industry-wide initiatives and research on AI ethics, fostering regional collaborations among SEA countries to collectively advance ethical standards while supporting innovation.
It is imperative that companies implement robust data governance and privacy-by-design practices, such as pseudonymisation and strong security measures, as these are foundational components of ethical AI. This foundation enables businesses to leverage AI tools responsibly, predicting consumer interests without compromising privacy.
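As a rough illustration of what pseudonymisation can look like in practice, the sketch below replaces a raw user identifier with a keyed hash. This is only one possible approach; the function name, key handling, and identifier format are illustrative assumptions, not a prescription from the EU AI Act or any specific vendor.

```python
import hmac
import hashlib

def pseudonymise(user_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Using a keyed hash rather than a plain hash means the pseudonym
    cannot be re-created or reversed without the secret key, which can
    be stored and rotated separately from the data itself.
    """
    return hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative usage: under a given key, the same ID always maps to the
# same pseudonym, so frequency capping and audience analytics still work
# on the pseudonymised data without exposing the raw identifier.
key = b"example-secret-key"  # in practice, load from a secrets manager
token = pseudonymise("user-12345", key)
```

Because the mapping is deterministic per key, downstream systems can join and aggregate on the pseudonym, while rotating the key severs the link to historical identifiers, a useful property for data-retention policies.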
By integrating these practices into their operations, marketers can actively shape ethical AI standards, balancing regulatory compliance with opportunities for innovation. This holistic approach not only benefits the industry but also fosters public trust in AI technologies, unlocking new avenues for ethical AI advancement.
Learn more about how Criteo is leading the Commerce Media and Retail Media space with its advanced AI Engine here.