Robert Kilian

ARIC asks… | Robert Kilian on responsible AI

This post has been translated by an AI and may contain translational inaccuracies.
In our new series "ARIC asks...", we conduct short interviews with interesting figures from the AI industry, beginning with Robert Kilian, Managing Director of CertifAI, an AI certification and software provider for AI systems. His keynote on software testing and risk-based regulation as the foundation of trust in AI will be streamed live on the ARIC LinkedIn channel.

 

ARIC: What is CertifAI? What exactly does CertifAI do?

Robert Kilian: CertifAI is a testing and inspection company for AI systems that was founded as a joint venture between PwC, DEKRA and the City of Hamburg. With our experts in AI technology and regulation, we are able to evaluate AI systems according to criteria such as robustness, security and fairness, as well as their compliance with regulatory requirements. Ultimately, we ensure quality assurance, particularly in the target sectors of automotive, banking, smart manufacturing and healthcare. To this end, we develop and operate software for testing AI systems and carry out audits to test and certify these systems.

 

In your opinion, when is an AI responsible?

We define trustworthiness of AI along seven dimensions:

  • Fairness
  • Autonomy & control
  • Reliability
  • Data protection
  • Transparency
  • Security
  • Sustainability

The fairness of an AI system, for example, is primarily reflected in its training and test data, which must be balanced and representative in order to avoid discrimination when the AI is later used. Human autonomy and control relate to the distribution of tasks between humans and AI and the associated empowerment of AI users.
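As an illustration of the kind of balance check described above, one might compare each group's share of the training data against a uniform expectation. This is a minimal sketch, not CertifAI's actual tooling; the dataset, attribute name, and tolerance are hypothetical.

```python
from collections import Counter

def group_shares(samples, key):
    """Return each group's share of the dataset for attribute `key`."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def flag_imbalance(shares, tolerance=0.2):
    """Flag groups whose share deviates from a uniform split
    by more than `tolerance` (relative deviation)."""
    expected = 1 / len(shares)
    return {g: s for g, s in shares.items()
            if abs(s - expected) / expected > tolerance}

# Hypothetical training records with one protected attribute.
data = [{"group": "A"}] * 70 + [{"group": "B"}] * 30

shares = group_shares(data, "group")
print(shares)                  # {'A': 0.7, 'B': 0.3}
print(flag_imbalance(shares))  # both groups deviate >20% from the 50/50 expectation
```

In practice, "representative" is judged against the deployment population rather than a uniform split, but the mechanics are the same: compare observed shares against a reference distribution and flag deviations.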

In addition, our model for reliable AI development follows a four-stage approach, which also determines how we test AI systems. It consists of a reliable development process, a targeted risk analysis, edge-case tests based on the operational design domain (ODD), and statistical evidence of error rates.
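The "statistical evidence of error rates" stage can be illustrated with a simple confidence bound on the true error rate from a finite test run. The sketch below uses a one-sided Hoeffding bound; the choice of bound and the numbers are my illustration, not CertifAI's actual method.

```python
import math

def error_rate_upper_bound(errors, n, delta=0.05):
    """One-sided Hoeffding upper confidence bound on the true error rate,
    given `errors` failures observed in `n` independent test cases.
    Holds with confidence 1 - delta."""
    p_hat = errors / n
    return min(1.0, p_hat + math.sqrt(math.log(1 / delta) / (2 * n)))

# Hypothetical audit result: 3 failures in 1,000 edge-case tests.
bound = error_rate_upper_bound(3, 1000)
print(f"True error rate <= {bound:.4f} with 95% confidence")
```

The point of such a bound is that "we saw 3 failures in 1,000 tests" becomes a defensible statistical statement about the deployed system, with the required number of tests growing as the tolerated error rate shrinks.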

 


Dr. Robert Kilian is CEO and Managing Director of CertifAI, a testing and certification provider for AI systems. He is the founder of the data analysis company Beams, teaches at Humboldt University in Berlin on the subject of AI regulation, and is a board member of the German AI Association. He also sits on several supervisory boards, including at N26, where he was responsible for all regulatory matters as Chief Representative and General Counsel from 2015 to 2020. Robert Kilian was also a long-standing member of the FinTech Council of the German Federal Ministry of Finance and a founding member and first President of the European FinTech Association (EFA). He began his professional career in M&A at the law firm Hengeler Mueller.


 

What opportunities does a certification system offer?

Firstly, the certification of AI systems – especially those that involve high risks – is crucial for strengthening trust in AI. After all, a certificate is nothing more than an independent seal of approval confirming the system's conformity with certain legal or voluntary quality standards.

Secondly, a certification system ensures that AI systems function reliably. If AI systems must undergo independent testing before being placed on the market, we get more capable systems that are actually ready for the market.

Thirdly, a certification system provides legal certainty for the companies developing AI. They can be confident that their systems meet the legal requirements if they comply with the standards required for certification. Certification therefore increases the willingness to invest in AI development and promotes innovation.

European legislators have also taken up these advantages: with the AI Act, a conformity assessment of AI systems in high-risk application areas is now a prerequisite for placing such systems on the market. We are currently seeing similar developments in every major industrial region around the world.

 

Which AI topic should we talk more about?

The technical standards and harmonized norms currently being developed for the testing of AI systems, because they show how we can achieve product safety and, with it, trust in the technology. With sufficient participation via the standardization organizations, this can also become a real advantage for companies.

 


With our interviews, we want to introduce you to different perspectives and players in the field of AI. The positions of our interview partners do not necessarily reflect the positions of the ARIC.

