The change in tone and content between last year's SXSW and the current event has been quite striking.
Of course, there are the strong technology tracks (blockchain, for example, has its own track for the first time this year), and there are 20 hours of presentations, panels and workshops on data privacy and more than twice that on artificial intelligence.
The difference this year is that, for the data privacy and AI topics, the vast majority of the content is not on the technology itself but on its human and societal impact.
That data privacy would be such a focus is less of a surprise after another year of security breaches and broken trust.
Just last month a Wall Street Journal investigation showed how 11 apps were sharing intensely personal information with Facebook, even if a user had no connection to Facebook. One example was Flo Health’s Flo Period and Ovulation Tracker, which claims more than 25 million active users and reported to Facebook when a user was having her period or had expressed an intention to get pregnant. The price of "free" has never been so costly to our privacy.
This has been a decidedly post-Cambridge Analytica, Facebook-sceptical SXSW.
In the case of AI, the shift to a discussion of its impact on humanity has been far more surprising, and welcome. AI has long played the antagonist in dystopian science fiction, but at past SXSWs the conversation was about efficiency, convenience, sustainability and even creativity. Not this year: job displacement, the problem of coded bias, and the ethical questions and risks of unintended consequences of AI are all part of the SXSW agenda.
A good example was a panel on self-driving cars on 9 March featuring Chris Urmson of Aurora Innovation and the author Malcolm Gladwell. I joined the session with some of my Mercedes-Benz clients, expecting a conversation about the timeline towards level-five autonomous driving.
Instead, there was little discussion of the technology and a real focus on the wider impact on society, with some fascinating hypotheses posed. What if self-driving cars reduce the 95% of road accidents caused by human error but, in doing so, expose us to the larger-scale risk of whole fleets of cars being hacked and crashed? Is the productivity gain of lighter traffic and being able to occupy oneself in transit worth the loss of the pleasure and sense of freedom of simply going for a drive? What is the impact on society when the majority of journeys are shared with strangers? What are the ethical implications when a self-driving car acts to preserve the safety of its occupants even if that means harming someone else?
This shift in the debate is becoming normal as innovation matures. First there is the hype around the technology and an unreasonably positive view of its impact on the world (blockchain is in this phase). But before mainstreaming comes a phase of challenge, debate and reflection, when the true impact of adoption becomes clearer and needs a critical light shone on it.
In our marketing communications industry we aren’t yet asking these tougher questions on the growing role of AI and automation.
Right now, in the face of squeezed client spending, automation has been applied primarily to efficiency and to delivering what clients want at the lowest possible cost. The automated work is therefore in transcreation, dynamic rendering of programmatic content and reporting.
But, as the role of AI and automation in the industry grows, we will have to confront almost exactly the same questions that the mobility industry is answering here at SXSW. What is our responsibility for the massive job displacement that AI and automation will bring to our legacy agency operations, and how are we building capability programmes to address it? What emerging biases are we building into our dynamic creativity algorithms that reflect averages rather than the real diversity of gender, lifestyle and culture?
And—perhaps the biggest question of all—what are we doing to protect, defend and promote creativity? AI and automation are working to deconstruct creativity and somehow reverse-engineer it into words and sounds and pictures to be reassembled on demand for one-to-one personalisation at scale.
We know two truths about our business. The first is that creativity has an unreasonable power to drive growth. The second is that true creativity cannot result from the iteration of algorithms.
So, as at SXSW, we are standing at a crossroads of creativity and technology. We must take a critical view of the long-term impact of automation and AI on our industry, looking beyond the glimmer of the short-term benefit of making something a little cheaper and balancing it against where we generate true value.
Justin Billingsley is global chief executive of Publicis Emil and chief executive of Publicis Groupe DACH and Brazil. He was formerly the Greater China chairman and CEO of Saatchi & Saatchi.