Aug 14 – 18, 2023
Europe/Berlin timezone

Opening the "Black Box" with Explainable AI (XAI)

Aug 15, 2023, 10:50 AM
20m
Jupiter

Speaker

Dr Johannes Novotny (VRVis)

Description

Be it recreation, medicine, or the judicial system, state-of-the-art (SOTA) machine-learning methods, colloquially called Artificial Intelligence (AI), permeate our daily lives. Increasingly complex AI systems now conquer sophisticated new tasks on a weekly basis, yet our conceptual understanding of how these SOTA methods work is not keeping pace with their ever-improving abilities [1].
While individual parts of AI systems and their underlying neural networks are well understood, the inner workings of a system trained for a specific task often become opaque, due to the massive complexity of the interplay between internal components. This turns AI systems into “black boxes” which perform their tasks remarkably well, while leaving the question of how any given result was obtained unanswered.
Explainable AI (XAI) aims to address this problem by presenting the user with an easily understood chain of reasoning from the user’s order, through the system’s knowledge and inference, to the resulting behavior [2]. This talk discusses core approaches from the AI and visualization communities for generating explanations, but also for identifying what a given AI system has learned. Based on this, we show how XAI relates to the concepts of fairness, trustworthiness, and reliability, enabling the safe use of AI in sensitive applications, which might pave the way to an age of digital humanism [3].
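One common model-agnostic way to probe what a black-box model has learned is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. A minimal sketch, using an illustrative toy model and data (not material from the talk):

```python
import random

random.seed(0)

# Toy dataset with three features: the target depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * x[0] + 0.5 * x[1] for x in X]

def model(x):
    # Stand-in for any opaque trained model; here it happens to match the data.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, trials=10):
    """Average increase in error when one feature's column is shuffled."""
    base = mse(X, y)
    increases = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        random.shuffle(col)
        Xp = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
        increases.append(mse(Xp, y) - base)
    return sum(increases) / trials

importances = [permutation_importance(X, y, f) for f in range(3)]
print(importances)  # feature 0 dominates; feature 2 contributes nothing
```

Because the technique only queries the model's inputs and outputs, it applies to any black box, but it explains global behavior rather than an individual prediction.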

References

[1] Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, and Chudi Zhong, “Interpretable machine learning: Fundamental principles and 10 grand challenges,” Statistics Surveys, vol. 16, 2022.
[2] Michael van Lent, William Fisher, and Michael Mancuso, “An explainable artificial intelligence system for small-unit tactical behavior,” in Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence (IAAI’04). AAAI Press, 2004, pp. 900–907.
[3] Hannes Werthner, Allison Stanger, Viola Schiaffonati, Peter Knees, Lynda Hardman, and Carlo Ghezzi, “Digital humanism: The time is now,” Computer, vol. 56, no. 1, pp. 138–142, 2023.

Keywords AI, Machine Learning, Ethics
