PhD thesis: "Explaining Artificial Intelligence With Tailored Interactive Visualisations"


This thesis is the result of four years of hard work and was awarded the highest distinction, with the congratulations of the examination committee (cum laude), at KU Leuven.

The text starts with an extensive background section written in accessible language and then presents six chapters on my research in a traditional academic format. To "humanise" my work, each chapter concludes with anecdotes and photographs that give a glimpse of what happened while I was working on it. The presentation introduces the thesis to a broad audience.

Thesis cover

Recording of public defence

Summary

The rise of "big data" and artificial intelligence (AI) in countless application domains comes with tremendous opportunities, but also entails challenges concerning transparency and controllability. Well-performing AI models are often "black boxes": understanding how they arrive at their outcomes is hard or even infeasible. Researchers in explainable AI (XAI) therefore develop algorithm-centred and human-centred methods that try to give people insight into the reasoning process of AI models. In turn, the expectation is that this allows people to better understand and trust AI models, and thus make better-informed decisions. However, the body of experimental human-centred research that backs up these expectations is limited. In addition, it is unclear whether XAI techniques provide the insights required by different user groups across application domains and contexts in the first place. XAI studies with actual people and real-world data are therefore urgent.

Our work focuses on designing, implementing, and evaluating visualisation-supported explanations for AI systems in healthcare, agrifood, and education. Following human-centred research practices, we study three research questions: (1) How can visual explanations tailored to a target audience and application domain make AI models more transparent? (2) How can people control AI models with additional feedback, supported by interactive visual explanations? (3) How do visual explanations and control affect people's perceptions of AI systems, e.g., in terms of appropriate trust and understanding of their outcomes?

Overall, we show how explainability can be established through visual analytics, visualisation-supported justification, and visualisation-supported control. We do this by reviewing the existing literature, developing new visual explanations and control mechanisms in close collaboration with real end-users of AI systems, and conducting user studies to better understand how our explainability methods affect people's perceptions of AI systems. Our work demonstrates the value of human-centred and interdisciplinary research to design XAI solutions that align with people's needs and truly augment human capabilities with AI.

Diagram of 6 screenshots of visualisation-based explanations