How can artificial intelligence become trustworthy?

Free entry

Artificial intelligence is used in many areas of our society, and understanding how it affects our lives requires knowledge from many disciplines. The more widespread AI becomes, the less we feel able to understand and control it. With the lecture series "XAI in Society", the research project Explainable Intelligent Systems (EIS) and the DFG Collaborative Research Center for Perspicuous Computing (CPEC) offer an insight into current research on explainable artificial intelligence (XAI). The lectures approach AI from psychological, ethical and legal perspectives and address the question of trustworthy AI as well as the EU AI Act. The lecture series invites you to discuss the future of AI.

Program overview

December 5, 2024 - Nadine Schlicker, Philipps-Universität Marburg, Illuminating the complexities of Trustworthy AI with the help of the Trustworthiness Assessment Model (TrAM)

January 9, 2025 - Prof. Dr. Holger Hermanns, Saarland University, Does the European AI Act tame the AI Monster?

February 6, 2025 - Dr. Kevin Baum, German Research Center for Artificial Intelligence, From Explanations to Justifications: Trust and Responsibility Despite Moral Uncertainty
