Christian Uhle is a philosopher and author who works on topics of digitalization. In his recently published book „Künstliche Intelligenz und echtes Leben: Philosophische Orientierung für eine gute Zukunft" (Artificial Intelligence and Real Life: Philosophical Orientation for a Good Future), he examines AI and its impact on questions of meaning.
ARIC: Why do you think it’s important to talk about AI and meaning?
Christian Uhle: I have spent a lot of time looking into the question of meaning and have found that a common misconception is that the search for meaning is a private matter.
This perspective ignores the fact that meaning is very much created in a social and societal context. We are always dependent on external structures and contexts in order to perceive our lives as meaningful. For me, the question now is to what extent artificial intelligence changes these structures. AI has an impact on economic structures, on the way we live together, on medical developments, on scientific findings, on the basis of which new technologies are then developed, which in turn shape social value systems. This means that our lives and our society are changing profoundly at various levels. One question associated with this is whether this transformation will lead to us having better opportunities to perceive our lives as meaningful.
You once wrote, "Technology is the way we relate to the world." How do we relate to the world through AI?
Let’s take the example of a blind person who, thanks to artificial intelligence, now has the opportunity to have the AI describe in detail what is around them with every step they take. This profoundly changes their own autonomy, but also their own relationship to the world. And it is the same when people allow AI to help shape their daily routines because they rely on an AI assistant. AI then has a direct impact on the way we shape our lives. These are just two examples of many.
“A key question is always whether artificial intelligence strengthens or weakens human autonomy. There is massive potential in both directions.”
Is it possible to generalize to what extent such apps for the visually impaired, for example, have an impact?
A key question is always whether artificial intelligence strengthens or weakens human autonomy. There is massive potential in both directions. In the example of this blind person, we have great potential to strengthen autonomy.
At the same time, there is a great danger that artificial intelligence will make us more manipulable: it creates increasingly accurate and differentiated personality profiles that can then serve as a basis for steering us in a certain direction. There is also a danger simply because not thinking can be convenient. If we as a society begin to outsource more and more decisions, small or large, to an AI, then we must take great care not to weaken our ability to make decisions and think for ourselves. This concerns political decisions, such as education or healthcare policy: which resources are allocated where? And it concerns my private life: What should I wear today? What is the right outfit for work or for a party? Each question may seem harmless in isolation, but across the many cases taken together, the ability to form one's own judgments and decisions can be weakened.
Is there a current use case that concerns you?
I think it’s the same with writing. Blog posts and social media posts are now increasingly being created with the help of AI. This is certainly not a problem in every case. But it is of course the case that formulating your own thoughts, taking the time to do so, even if it takes a little longer, helps to sharpen your own thoughts. It is part of the inner debate. And I think that even applies to a simple post in some respects.
For me as an author, it is even more intense when writing books. Before I write a chapter, I do a lot of research and structuring; I have extensive notes and generally know what I am going to write. But like almost everyone who writes, I have individual thoughts and make connections while writing. This process affects the result and is part of philosophizing; it shapes my own position and perspective. And if you now try to take a shortcut, jump straight to the result and leave out this process: couldn't that be a point where autonomy is weakened?
“Now is the time to actively set the course and consciously shape development.”
In October of this year, it was announced that two pioneers of machine learning had been awarded the Nobel Prize. One of the laureates then made a very drastic public statement, saying that he takes a critical view of current AI developments. How do you see all the hopes that were pinned on AI? Have they been confirmed, or are they passé?
It is far too early to take stock. The bottom line is that we are only at the beginning of this development, even though research has been going on for decades. Now is the time to actively set the course and consciously shape development. What I don't believe is that development will automatically move in a positive direction. The tech industry in particular constantly spreads economic optimism, but there may well be conflicts of interest between a company's own interests and those of society as a whole.
A second optimism that is part of social modernity is the belief in technological progress that will raise us to a higher level of civilization and help us to live a better life. Technology has many advantages, but it can also have negative effects.
We now see, for example, that a side effect of the whole industrialization process is man-made climate change. This was neither known nor intended at the time. It is an unintended side effect of industrialization. And I think it’s the same now with AI. There is no guarantee of a positive development. If we want to end up in a wonderful future, then we have to actively, consciously and purposefully take it into our own hands.
Who is this “we” that has to take the future into its own hands?
This “we” has different levels of responsibility that should not be played off against each other. One level is the individual level, i.e. that you really think for yourself: How do you deal with it? Which technologies do you use and how do you integrate them into your own life? A second level of responsibility is the organizational level – the company, the institution, the school and so on. We should take the time to ask ourselves: How do we want to anchor AI in our organization? The third level is the social and political level. Of course, responsibility lies with all of us as citizens who are part of democratic discourse and processes, but also with politicians.
Who is typically interested in your work?
I have been giving talks on digitalization for many years. The interest is diverse, but the main interest comes from companies.
Does the tech bubble need more philosophy?
I am convinced: Yes. Because we are living in a time of far-reaching upheaval. This raises numerous fundamental questions. And fundamental questions are always philosophical questions too. In order for the interplay to succeed, as a philosopher you naturally have to ask yourself: How can I find a relevant language? Which examples do I choose?
What is your vision of a good or at least better and more meaningful life with AI?
Personally, I suffer from bureaucracy. It would give me more freedom if AI could take over bureaucratic processes for me.
Do you mean, for example, writing invoices, tax returns and the comparable joys of self-employment?
Exactly, everything that goes with it.
Is that also your general vision? Reducing bureaucracy through AI?
I think it’s really exciting and I’m curious to see what happens. AI is a lever in so many fields. Let’s take history, for example. It’s not possible to train so many historians that they could read or analyze all the archives. That is impossible. But now, with AI, masses of sources can suddenly be evaluated and put into context. AI can not only read, but also really evaluate!
I therefore assume that, for example, the so-called progress, the new findings in the field of history, will take place at a much faster pace in the coming years and decades than before. Because it’s not just that historians suddenly have a new tool at their disposal, it’s as if they have a gigantic team of assistants sifting through the sources for them.
Why do you say “so-called progress”? Why the quotation marks at this point?
Because the concept of progress is often tied to technology. But true progress is when a society grows together, when there is more justice, when people live together in peace, in happiness, in healthy relationships without violence. That is real progress, not the question of whether we upgrade from HD to 4K displays.
There is a tool designed to help people who have experienced racist attacks. They can then chat about it with a bot based on ChatGPT. I was skeptical at first.
What is this bot supposed to do? Be empathetic?
Yes, exactly, that was the idea.
AI systems are getting better and better at simulating empathy. But this is and remains a simulation. They can give us the feeling of being understood, but we are not really understood. They can give us the feeling of experiencing meaning, but we are not experiencing real meaning. It is a magic box creating an illusion that can feel good to us. The whole thing has value if, for example, such conversations help those affected to open up and to engage in interpersonal relationships.
Of course, there is now the perspective: Well, if it feels good to people, if it conveys a feeling of empathy, then it's fine; it doesn't matter whether it's a human or an AI. Well, it just isn't fine! At its core, the need to be understood is not a need for some kind of theater, but for real understanding. AI can never fulfill, on a substantial, truthful and authentic level, what we as humans desire.
The book “Artificial Intelligence and Real Life” by Christian Uhle was published by S. Fischer on October 9, 2024. In it, the author examines the promise of meaning in AI. To this end, he looks at many study results, which he coherently combines with philosophical considerations and theories. The ARIC team enjoyed reading this book and appreciated its topicality and clear argumentation.
How do you view online dating in this context, which you also discuss in your book?
In Germany, online dating is now the most common way for people to meet and is therefore a key structural mechanism in our society. Online dating, and the design and algorithms of its platforms, thus largely determine who spends time with whom, who goes on vacation with whom, who has sex with whom, who has children with whom, who influences whom.
This is not necessarily problematic. We see, for example, that social mixing is higher among couples who have met online. The formation of filter bubbles is less pronounced than in the analog world. I would see that as positive.
But for people who are still searching and using these apps without having found anyone, negative psychological effects can often be observed, as various studies have documented. This has a lot to do with the fact that the large number of supposed potential partners can trigger a sense of overload as well as high competitive pressure. This in turn increases people's fear of remaining single. Interestingly, intensive use of dating apps leads to pessimism rather than optimism.
A vision of the future often formulated by app makers is that AI will determine who you see to a much greater extent than before. The crux of online dating is currently the quantity principle: it is impossible to look at and evaluate 400 potential partners in one day offline, but online it is possible. There are now plans to switch from this quantity principle to a quality principle. AI will then not only determine who swipes right or left on whom, but will also create personality profiles: Who writes with what choice of words and at what speed? The more personal data I give the AI access to, the better the personality profile it creates of me will be, and the more accurately it will potentially be able to predict whether I am a good match for someone. Initial surveys indicate that people are prepared to grant massive, comprehensive access to their data for the sake of love. The desire for love is very great; it is greater than the desire for data protection.
Is there something that feeds your AI optimism right now?
I know people who are not native German speakers who now run every email they write through AI before sending it, and for them this is a big step forward because they are perceived as more professional on the job market. There are notions of professionalism that create winners and losers; formulating an email without grammatical errors is one of them, regardless of the quality of its content. In my opinion, this is a positive example of AI actually reducing inequalities. In many other areas, I see more of a danger that inequalities will be exacerbated.
“If, for example, more and more people meet and fall in love through an AI, then the structures of who is in a relationship with whom will become even more centralized”
In which areas are inequalities being exacerbated by AI?
Above all, of course, in the world of work. The skills demanded of people are changing, which in turn shifts the structures of winners and losers. But we should also look at areas that initially seem very private. If, for example, more and more people meet and fall in love through an AI, then the structures of who is in a relationship with whom become even more centralized and are shaped by individual private-sector actors. And that is definitely problematic from a democratic perspective, because these structures are not transparent. I don't think this necessarily speaks against using such systems at all, but it has critical potential and we should therefore keep a close eye on it.
Interview: Sabrina Pohlmann
With our interviews, we want to introduce you to different perspectives and players in the field of AI. The positions of our interview partners do not necessarily reflect the positions of the ARIC.