Jan Werum

Unlimited possibilities – deepfakes in marketing

This post has been translated by an AI and may contain translation inaccuracies.
Deepfakes are increasingly making their way into marketing and media. Jan Werum on new possibilities and the role of journalism.

A large banner adorns the unfinished “Elbtower”, a bankrupt construction project in Hamburg. On it, the car rental company Sixt advertises cheap rental cars under the slogan “So you don’t run out of money”. The agency Jung von Matt is responsible for the viral campaign. It genuinely looks as if the oversized banner were attached to the shell of the former prestige project. In truth, however, it is just a well-made deepfake.

Deepfakes are increasingly finding their way into advertising and marketing. Although deepfakes have long been possible with Photoshop and similar tools, the hurdles have dropped significantly thanks to MidJourney, ChatGPT, Clipdrop and the like.

However, Sixt is by no means the only company developing advertising campaigns with the help of AI tools. Deutsche Vermögensberatung also recently attracted attention with a clip showing its brand ambassador Jürgen Klopp in various professions. These scenes never actually took place: Klopp’s face was simply placed into the situations using AI, creating a deceptively real clip. The energy provider Tibber has likewise advertised with Angela Merkel’s face without ever having the former German Chancellor in front of the camera.

The progress of the generation tools

Thanks to tools such as OpenAI’s Sora, the possibilities deepfakes open up for the advertising industry are almost unlimited. Agencies are aggressively promoting the new options. A message from the deceased founder on the company’s anniversary? No problem! All it takes is 1,000 to 2,500 euros and good footage of the late boss. More elaborate projects are advertised at up to 50,000 euros.

In fact, even private users can now create convincing deepfakes without much know-how. However, as with any technology, these possibilities depend on how they are used. In January of this year alone, YouTube deleted over 1,000 so-called scam deepfakes, yet detection tools can barely keep pace with the progress of the generation tools. One day a supposed Jennifer Aniston advertises free MacBooks, the next a supposed newsreader Christian Sievers promotes dubious financial products. 81% of Germans believe that they cannot recognize deepfakes.

Distinguishing the true from the false

Large editorial offices have now set up dedicated departments to detect such fakes. But the media’s resources – in times of declining circulation and sales figures – are limited. Providers such as Telekom are already running awareness campaigns warning against malicious deepfakes.

But is any of this new? Not really. It has always been difficult for the media and consumers to distinguish the true from the false. When Notre-Dame burned in Paris in 2019, the internet giants Google and YouTube had great difficulty classifying the news because their systems initially took it for fake. The famous photo of the Reichstag fire, which repeatedly ends up on front pages, actually comes from the GDR feature film “The Vicious Circle”. And when 180,000 people recently took to the streets to protest against right-wing extremism, members of the AfD shared photos that appeared to show people standing on the water of the Alster, in an attempt to cast doubt on the authenticity of the images. In fact, the photos were simply shot from a low angle, which created that impression. They were not fakes; a narrative had merely been constructed for political motives.

But journalism itself has also been a source of deepfakes. In 2023, for example, the magazine “Die Aktuelle” apparently conducted an interview with Michael Schumacher. The problem: the interview was AI-generated and only poorly labeled as such. A short time later, it cost the editor-in-chief her job.

AI shifts the barrier to entry

These examples show that artificial intelligence is not actually needed to create deepfakes; AI tools are only one factor in a deepfake’s success. In my opinion, these are the main factors behind the spread of malicious deepfakes:

  • Social media algorithms reward attention, which favors well-crafted deepfakes.
  • The media have difficulty checking the sheer number of messages and posts for fakes.
  • Thanks to social media, everyone has become a sender and disseminator of news, not just a recipient as before. This has changed the role of the gatekeeper.
  • Tools based on generative AI have drastically lowered the entry barrier for deepfakes.

To summarize: deepfakes and fake news have always existed in media and advertising, but they are now circulating in unprecedented numbers. As with any technology, what matters are the intentions behind its use. Journalism should develop systems to detect such posts more reliably. Above all, though, it should pay attention to how it uses AI itself and how it wants to regulate itself in the future, so as not to lose credibility.

It is to be welcomed that the focus is increasingly shifting and that not only risks but also opportunities are being examined. The University of Leipzig recently launched a comprehensive research project on this topic.

We at ARIC e.V. stand for responsible AI, which means always carefully weighing up the opportunities and risks and, above all, raising awareness – because a third of the population has never heard of “deepfakes”.


Our work on deepfakes

ARIC has just set up an internal working group to examine deepfakes from an interdisciplinary perspective.


About the author

Jan Werum works as a communications expert at ARIC. In addition to AI topics, he also covers quantum technologies for HQIC. Previously, he worked for a long time at a startup in the field of autonomous robotics.

You can reach him by e-mail at: werum[at]aric-hamburg.de