Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding

An interactive presentation given at Università della Svizzera italiana, discussing how to interpret and understand insights generated with surrogate explainers.
Where: Faculty of Informatics, Università della Svizzera italiana, Lugano, Switzerland.
When: 1.15–2.15pm on Tuesday, October 11th, 2022.

About the Presentation

Myriad approaches exist to help humans peer inside automated decision-making systems based on artificial intelligence and machine learning algorithms. These tools and the insights they produce, however, tend to be complex socio-technological constructs themselves, hence subject to technical limitations as well as human biases and (possibly ill-defined) preferences. Under these conditions, how can we ensure that explanations are meaningful and fulfil their role by leading to understanding?

In this talk I will provide a high-level introduction to and overview of explainable AI and interpretable ML, followed by a deep dive into practical aspects of a popular explainability algorithm. I will demonstrate how different configurations of an explainer – which is often presented as a monolithic, end-to-end tool – may impact the resulting insights. I will then show the importance of the strategy employed to present these insights to a user, arguing in favour of a clear separation between the technical and social aspects of such techniques. Importantly, understanding these dependencies can help us build bespoke explainers that are robust, reliable, trustworthy and suitable for the unique problem at hand.
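
To make the configuration issue concrete, below is a minimal sketch of a local surrogate explainer built with plain scikit-learn rather than with any particular explainability library; the data set, the black-box model and the neighbourhood widths are illustrative assumptions, not material from the talk. A shallow decision tree is fitted to the black-box predictions in a neighbourhood sampled around a single instance, and changing nothing but the width of that neighbourhood can already change the feature importances the surrogate reports.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Black-box model whose individual predictions we want to explain.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=42).fit(X, y)

instance = X[0]  # the data point whose prediction is being explained


def surrogate_importances(scale, n_samples=1000, seed=0):
    """Fit a local tree-based surrogate on data sampled around `instance`."""
    rng = np.random.default_rng(seed)
    neighbourhood = instance + rng.normal(0, scale, (n_samples, X.shape[1]))
    labels = black_box.predict(neighbourhood)  # the surrogate mimics the black box
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=seed)
    surrogate.fit(neighbourhood, labels)
    return surrogate.feature_importances_


# Two equally "reasonable" neighbourhood widths, each telling its own story.
print(surrogate_importances(scale=0.1))  # tight sampling around the instance
print(surrogate_importances(scale=2.0))  # wide sampling across the feature space
```

The two configurations may attribute the same prediction to different features, and the same caveat applies to the other knobs of such explainers (sampling distribution, surrogate family, surrogate complexity), which is precisely the kind of dependency examined in the talk.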

FAT Forensics (Software)

To support the goals of this presentation, we employ FAT Forensics – an open source Python package that can inspect selected fairness, accountability and transparency aspects of data (and their features), models and predictions. The toolbox spans all of the FAT domains because many of them share underlying algorithmic components that can be reused in multiple different implementations, often across the FAT borders. This interoperability allows, for example, a counterfactual data point generator to be used as a post-hoc explainer of black-box predictions on the one hand, and as an individual fairness (disparate treatment) inspection tool on the other. The modular architecture [1, 2] enables FAT Forensics to deliver robust and tested low-level FAT building blocks as well as a collection of FAT tools built on top of them. Users can choose from these ready-made tools or, alternatively, combine the available building blocks to create their own bespoke algorithms without the need to modify the code base.
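
As a rough illustration of this reuse – written in plain scikit-learn, not the FAT Forensics API, with made-up data, feature roles and search grids – the sketch below implements a naive single-feature counterfactual search and applies it twice: once as a post-hoc explainer of a prediction and once as a disparate-treatment probe that checks whether altering only a protected attribute flips the outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan data: columns are [income, years_employed, protected_attribute].
X = np.array([[30, 1, 0], [80, 10, 1], [45, 3, 0], [70, 8, 1],
              [25, 2, 1], [90, 12, 0], [40, 4, 1], [60, 7, 0]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
model = LogisticRegression().fit(X, y)


def counterfactual(instance, search_grid):
    """Return the smallest single-feature change (within `search_grid`, a mapping
    of feature index to candidate values) that flips the model's prediction."""
    original = model.predict([instance])[0]
    best = None
    for index, values in search_grid.items():
        for value in values:
            candidate = instance.copy()
            candidate[index] = value
            if model.predict([candidate])[0] != original:
                distance = abs(value - instance[index])
                if best is None or distance < best[0]:
                    best = (distance, index, value)
    return best


applicant = np.array([35.0, 2.0, 0.0])

# Transparency: what actionable change would flip this applicant's prediction?
print(counterfactual(applicant, {0: range(30, 100, 5), 1: range(0, 15)}))

# Individual fairness: does changing only the protected attribute flip it?
print(counterfactual(applicant, {2: [1.0]}))
```

In FAT Forensics the analogous counterfactual building block is shared between the transparency and fairness parts of the toolbox; please consult the package documentation for the actual API.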

Resources

The presentation is provided as an interactive set of slides utilising ipywidgets and built with a Jupyter Notebook based on RISE and reveal.js. This notebook (hence the presentation) can be executed locally on one’s own machine or launched directly in the web browser through Google Colab or MyBinder.

Slides

Instructors

Kacper Sokol

Kacper is a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, affiliated with the School of Computing Technologies at RMIT University, Australia, and an Honorary Research Fellow at the Intelligent Systems Laboratory, University of Bristol, UK. His main research focus is explainability – transparency and interpretability – of data-driven predictive systems based on artificial intelligence and machine learning algorithms. Kacper holds a Master’s degree in Mathematics and Computer Science, and a doctorate in Computer Science from the University of Bristol. Throughout his career Kacper has been a visiting researcher at the University of Tartu, Estonia, as well as the Simons Institute for the Theory of Computing at UC Berkeley, USA; prior to joining RMIT he worked on numerous projects at the University of Bristol. In his research, Kacper has collaborated with industry partners such as THALES, and he has provided consulting services in explainable artificial intelligence and transparent machine learning to companies such as Airbus.

Contact
Kacper.Sokol@rmit.edu.au

References

  1. Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, and Peter Flach. 2020. FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. Journal of Open Source Software, 5(49), p.1904. https://doi.org/10.21105/joss.01904

  2. Kacper Sokol, Raul Santos-Rodriguez, and Peter Flach. 2022. FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. Software Impacts, 14, p.100406. https://doi.org/10.1016/j.simpa.2022.100406