Maximilian Kiener is a junior professor at TU Hamburg-Harburg.

Interview | The ethical view of AI

This post has been translated by an AI and may contain translation inaccuracies.
Maximilian Kiener is a junior professor at TU Hamburg-Harburg, where he heads the Institute for Ethics in Technology. Together with his team, he researches topics in AI ethics from a philosophical perspective. In this interview, he shared his understanding of responsible AI with us and explained the opportunities that AI ethics also offers entrepreneurs.

 

ARIC: You are a junior professor of technology ethics. How did you come to specialize in this topic?

Prof. Maximilian Kiener: I was trained in analytical philosophy and ethics. These fields have always had a strong affinity with mathematics and the natural sciences. At the same time, I specialized in applied ethics at Oxford. As a result, engaging with intelligence and computation became an exciting topic for me.

These topics have a tremendous impact on our lives and raise ethical questions. In our time, AI is a disruptive development that offers an incredible number of opportunities, but also needs to be used with a sense of proportion.

 

What do you mean by responsible AI?

First of all, a practice of accountability. I do certain things; if they go well or badly, I owe others answers: I am accountable. With AI, this becomes a critical point, because powerful systems are no longer easy to understand and explain. How do we hold each other accountable for opaque technologies?
Things can always go wrong. The fear is that the use of AI will create situations in which it is not at all clear who is responsible: if an autonomous car hits someone, or if a robot used in psychological care harms someone. We need to be able to attribute responsibility in order to deal with such cases of damage.

But the term Responsible AI is even broader. From my point of view, it involves dealing proactively with ethical standards. I don’t wait until problems blow up in my face; I think about them now: What data do I need to collect? How can I collect it so that I can verify that it is fair and doesn’t just benefit certain groups of people? This forward thinking is very important. And then, in a second step, an accountability practice has to remain in place. Companies need to recognize this: if they develop and use this technology, they owe others an answer as to how they do it, and this has to be embedded in a social discourse. I believe these two components are the core of Responsible AI.

 

“How can we ensure that this extremely powerful technology remains understandable for us?”

 

Are there any positive examples or best practices that you can share?

One anchor point is the EU AI Act, which strengthens the right to explanation. This is a very positive development. However, the fear is that it remains a political promise that cannot really be fulfilled technologically.

In medicine, this is a big issue: when AI is used in diagnostics, how is this communicated to patients, and how do doctors work with it? How can we ensure that this extremely powerful technology remains understandable to us? And what is a good explanation?

 

“I see that many companies are overwhelmed.”

 

Do you have the feeling that academic voices are also heard in practice? Or is academia, perhaps unintentionally, an ivory tower?

I see that many companies are overwhelmed. They are exposed to a great deal of regulation, which is often abstract. Small and medium-sized companies in particular find it difficult to pin down what it means for them in practice: regulations rarely provide concrete instructions on how to implement something. And for many companies, high-risk AI applications are not even relevant.

My impression is that a serious interest in ethics is gaining ground. For one thing, I believe that ethical behavior can create concrete economic added value. It helps to set the course at an early stage in order to avert problems later on. Ethics is a kind of proactive legal compliance. Sometimes legal terms are vague, and ethics can help us make them concrete and adapt them to our corporate values. It can provide orientation.

Ethics can also create added value for companies by helping them prioritize resources. Often it is not clear where to put money and personnel, because there are so many risks. An ethical assessment can help identify the most serious issues and deploy resources there.

Ultimately, however, it is also a fundamental, existential question of character for companies: What do I actually want as a company? Do I want the world to be a little better because of my products? Or do I focus purely on financial matters?

My hope is that entrepreneurship will go beyond the financial aspects and develop a vision for shaping the future that includes ethics. And I see this as a matter of excellence: bringing excellent products or services to market also means taking ethics seriously.

 

“How can we couple AI with our human intelligence to create a hybrid intelligence?”

 

Which cases are particularly relevant in practice?

The medical field is very exciting. It is the area in which AI can already significantly outperform humans: an AI system can already be better at diagnostics than a human radiologist, and an AI system can develop drugs faster than a team of researchers. How do we deal with the situation when AI outperforms experienced experts and we can no longer understand in detail what it is doing? We have a great benefit here, but also many challenges.

Then there is everything involving interaction between humans and AI. Large language models could become even more personalized in the coming years. Imagine a ChatGPT that is enriched with more specialized knowledge, highly personalized, and then acquires the ability to plan over longer periods of time. Many questions will arise in this area. This will have a major impact on the optimization of processes in companies, on information retrieval, on efficiency, on customer service, but also on the design of innovation. How can we combine AI with our human intelligence to create a hybrid intelligence that gives us a boost?

 

How dangerous is AI for our democracy, for example in election campaigns?

AI in social media can promote the dissemination of popular content. But popularity is not correctness. It also harbours the risk of polarization. And AI opens up the possibility of highly personalized influence. We all know this from Amazon: “You might also like this item”. But the same applies to political influence. You could create psychometric profiles and influence the voting behavior of users.

On the other hand, I also think that AI can make our democracy better and move us forward as a society. On the platform pol.is, AI was used to structure a complex debate and to identify the people most likely to move the debate forward. AI can therefore also help us bring new order to a complex public debate culture, in which we are exposed to a huge flood of information, and to talk to each other better.

But it can go both ways. AI is like the Roman god Janus with two faces: it can take us backwards or forwards.

 

Let’s assume we find ourselves in a positive future ten years from now, where AI is used for good. What might that look like?

AI is always good where we have a cognitive bandwidth problem and cannot process all the information because there are simply too many things. There is an interesting researcher, Cesar Hidalgo. He thinks AI avatars could help us draw up legislative proposals; we would then decide on them. The complex processing of information could be automated in such a way that it frees up our mental resources for reflection. We would keep the important decisions. The positive version would be that this works. Whether this is realistic remains to be seen.

Interview: Sabrina Pohlmann

 


With our interviews, we want to introduce you to different perspectives and players in the field of AI. The positions of our interview partners do not necessarily reflect the positions of the ARIC.


 
