Project introduction – A new internal working group has formed at ARIC to examine the links between artificial intelligence and opinion formation/propaganda, and to provide information on topics such as deepfakes. The topic is especially important with regard to the 2024 super election year. An essay on the subject is currently being written.
April 14, 2024: With advances in generative artificial intelligence (GenAI), the risks that deepfakes pose for propaganda and disinformation are increasingly coming into focus. The technology can be used to create convincingly realistic images and videos. This development presents a major challenge: the media have always strongly influenced opinion formation, so changes in the media landscape deserve particular attention, as they affect how information is received and processed.
“It is (…) insufficient to limit the discussion about AI-related dangers for propaganda exclusively to deepfakes.”
This year’s state parliamentary, EU parliamentary and US presidential elections provide an insight into the influence of generative AI on political processes. However, it is insufficient to limit the discussion about AI-related dangers for propaganda to deepfakes alone. Although the generation of deceptively real videos and images entails considerable risks, the possibility of instrumentalizing images for propaganda has existed for a long time.
Equally important is the influence of AI that is not directly visible on a visual level: mechanisms that act at the individual level but harbor great potential dangers at the societal level.
“Competition for users’ attention and optimization through AI are further exacerbating the situation”
As ARIC, we strive to raise awareness of the hidden impacts of AI on society and develop strategies to combat them. The focus is on platforms whose user experience is controlled by AI. Since information dissemination shifted to the internet, algorithms have increasingly controlled what users see and what they do not see (Thaler & Sunstein, 2008). This is particularly evident in social networks, whose business models are based on data collection and targeted advertising through user tracking and predictive analytics (Russell, 2019; Zuboff, 2019). Advanced AI systems optimize the user experience to maximize engagement, both now and in the future; in doing so, they specifically exploit psychological vulnerabilities (Zuboff, 2019, p. 288; Yeung, 2012). The human tendency to absorb information unconsciously, regardless of conscious beliefs, opens up opportunities for manipulation through emotional incentives (Kramer et al., 2014). Algorithms reinforce this dynamic by preferentially recommending content that confirms existing beliefs (Russell, 2019), leading to the emergence of echo chambers and a rise in radical political views (Garimella et al., 2018; Sunstein, 2018). Competition for users’ attention and optimization through AI further exacerbate the situation by creating an increasingly heated and polarized information landscape.
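The feedback loop described above, where an engagement-maximizing recommender keeps serving content that matches a user’s existing beliefs, can be made concrete with a toy simulation. The sketch below is purely illustrative and does not model any real platform’s system: it treats the recommender as a simple epsilon-greedy bandit over content “slant” buckets, with a simulated user who engages most with aligned content. All names and parameters (`SLANTS`, `USER_STANCE`, `epsilon`) are assumptions chosen for the example.

```python
import random

random.seed(0)

# Illustrative only: content buckets by political "slant" and a simulated
# user whose latent stance determines engagement. Not any platform's
# actual algorithm.
SLANTS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # candidate content buckets
USER_STANCE = 0.5                      # hypothetical user's latent stance

def engagement(slant, stance=USER_STANCE):
    """Simulated engagement signal: highest for stance-aligned content."""
    return max(0.0, 1.0 - abs(stance - slant))

def run_feed(rounds=200, epsilon=0.1):
    """Epsilon-greedy recommender that maximizes predicted engagement."""
    counts = {s: 0 for s in SLANTS}
    means = {s: 0.0 for s in SLANTS}
    shown = []
    for s in SLANTS:                   # show each bucket once to initialize
        counts[s] = 1
        means[s] = engagement(s)
        shown.append(s)
    for _ in range(rounds - len(SLANTS)):
        if random.random() < epsilon:  # occasional exploration
            s = random.choice(SLANTS)
        else:                          # exploit: highest predicted engagement
            s = max(SLANTS, key=lambda x: means[x])
        reward = engagement(s)
        counts[s] += 1
        means[s] += (reward - means[s]) / counts[s]  # running average
        shown.append(s)
    return shown

feed = run_feed()
most_shown = max(SLANTS, key=lambda s: feed.count(s))
# The feed rapidly locks onto content matching the user's existing stance:
# the self-reinforcing loop behind the echo-chamber dynamic described above.
```

Even this minimal setup exhibits the lock-in effect: after a brief initialization, almost every recommendation falls into the bucket closest to the user’s own stance, because that bucket reliably yields the highest engagement signal.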
“However, the influence of fake content, which cannot be distinguished from genuine content, represents an additional dimension in terms of the possibilities for manipulating opinion formation.”
In societies where polarizing opinions coexist, deepfakes serve similar functions to earlier information practices by further reinforcing polarization and division. However, fake content that is indistinguishable from genuine content adds a new dimension to the possibilities for manipulating opinion formation. Given these challenges, our aim is to raise awareness of the diverse influences of AI on society and to develop effective strategies to combat them.
ARIC’s interdisciplinary working group is currently drafting a position paper that sets out the mechanisms of AI algorithms outlined above in conjunction with propaganda methods, and illustrates their influence on opinion formation at both the individual and societal level.
We want to raise society’s awareness of this important topic and encourage critical self-reflection. In the future, ARIC will offer educational and awareness-raising formats on this topic to support this process.
Contact persons
These contact persons are available for suggestions, inquiries and networking:

Graydon Pawlik
(Global, Social & Political Thought)
gp[at]aric-hamburg.de

Louisa Rockstedt
(Cultural Studies, Digital Media)
rockstedt[at]aric-hamburg.de
Participants in the working group
- Anselm Fehnker (Computer Science)
- Graydon Pawlik (Global, Social & Political Thought)
- Sabrina Pohlmann (Sociology)
- Louisa Rockstedt (Cultural Studies, Digital Media)
- Dr. Natalie Rotermund (Neurosciences)
- Nicolas Schulz (Psychology, Artificial Intelligence)
- Jan Werum (Communication Sciences, Journalism)
Sources
- Garimella, K., et al. (2018) “Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship,” in Proceedings of the 2018 World Wide Web Conference, Association for Computing Machinery, pp. 913–922.
- Kramer, A. D. I., Guillory, J. E. & Hancock, J. T. (2014) “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences, 111, pp. 8788–8790.
- Russell, S. (2019) Human Compatible: Artificial Intelligence and the Problem of Control, Viking.
- Sunstein, C. (2018) #Republic: Divided Democracy in the Age of Social Media, Princeton University Press.
- Thaler, R. & Sunstein, C. (2008) Nudge, London: Penguin Books.
- Yeung, K. (2012) “Nudge as fudge,” The Modern Law Review, 75(1), pp. 122–148.
- Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books.


