Why a chatbot does not mean what it says and why optimization might lead to unethical behavior
May 24th, 2 pm, online and in English
In this workshop, we will focus on some of the most relevant ethical paradigms currently discussed in moral philosophy. The aim is to find the paradigm that best describes the functionality of a state-of-the-art machine learning system. As it turns out, the ethical paradigm found in such a system contradicts some of the key concepts of Western democracies. This observation will serve as our starting point for examining the value alignment problem. In the second part of the workshop, we will turn to problems of responsibility in the case of chatbots and large language models in general, and to the question of how to address fake news in texts that lack any form of intentionality on the part of an actual speaker.
About the speaker
Antonio Bikić is a postdoctoral researcher at LMU Munich and the Fraunhofer Institute for Cognitive Systems, working on a project on ethically safe AI. He completed his PhD in philosophy (LMU Munich & ETH Zurich) in 2022 on semantics and morality in the context of reinforcement learning and has a background in philosophy (M.A.) and computer science/computational linguistics (M.Sc.). He has worked in international teams and on cross-sectional projects at the Max Planck Society, most recently as a programmer at the Max Planck Computing and Data Facility, as well as at LMU chairs in ethics.
He was also employed as an associate at PricewaterhouseCoopers (PwC) Munich, working on a requirements catalog for AI systems (Responsible AI, Digital Trust), and as a consultant at the German Association of the Automotive Industry (Verband der Automobilindustrie) in Berlin, focusing on ethics and autonomous vehicles. In addition, he provided consulting services to the Dean of the Faculty of Human Sciences at the University of Luxembourg, supporting him in setting up a Center for Digital Ethics.
This workshop is the first part of our series: ARIC Insights | Responsible AI. Watch out for announcements!