FAT Forensic Events

An interactive presentation for the AI and Humanity Summer Cluster Workshop, discussing how machine learning explainers can help humans to understand automated decision-making

Learn more about FAT Forensics: Source Code | Documentation

Resources: Recordings | Slides | Slack

Where Does the Understanding Come From When Explaining Automated Decision-making Systems?

An interactive presentation discussing how to interpret machine learning explanations for the AI and Humanity Summer Cluster Workshop.
Where: The Simons Institute for the Theory of Computing, UC Berkeley, California, US.
When: 11.15–11.45am on Thursday, July 14th, 2022.

About the Presentation

A myriad of approaches exists to help us peer inside automated decision-making systems based on artificial intelligence and machine learning algorithms. These tools and their insights, however, are socio-technological constructs themselves, hence subject to human biases and preferences as well as technical limitations. Under these conditions, how can we ensure that explanations are meaningful and fulfil their role by leading to understanding? In this talk I will demonstrate how different configurations of an explainability algorithm may impact the resulting insights and show the importance of the strategy employed to present them to the user, arguing in favour of a clear separation between the technical and social aspects of such tools.
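
To give a flavour of how an explainer's configuration can sway the insights it produces, the toy sketch below fits a local linear surrogate (in the spirit of LIME-like explainers) around a single instance and varies only the width used to sample the black box's neighbourhood; the reported feature importances shift accordingly. This is an illustrative scikit-learn example, not the exact experiment shown in the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# An XOR-like task that a linear model cannot capture globally,
# approximated by a random-forest "black box".
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
black_box = RandomForestClassifier(random_state=42).fit(X, y)

instance = np.array([0.1, 0.9])  # the data point being explained

# Fit a local linear surrogate to the black-box probabilities in a
# neighbourhood of `instance`; the sampling width is the only thing we vary.
for width in (0.1, 0.5, 2.0):
    samples = instance + rng.normal(scale=width, size=(1000, 2))
    probabilities = black_box.predict_proba(samples)[:, 1]
    surrogate = Ridge().fit(samples, probabilities)
    print(f'sampling width {width}: importances {surrogate.coef_.round(2)}')
```

Each configuration yields a different importance of the two features even though the explained instance and the underlying model never change.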

FAT Forensics (Software)

To support the goals of this demonstration, we employ FAT Forensics – an open source Python package that can inspect selected fairness, accountability and transparency aspects of data (and their features), models and predictions. The toolbox spans all of the FAT domains because many of them share underlying algorithmic components that can be reused in multiple different implementations, often across the FAT borders. This interoperability allows, for example, a counterfactual data point generator to be used as a post-hoc explainer of black-box predictions on the one hand and as an individual fairness (disparate treatment) inspection tool on the other. The modular architecture [1, 2] enables FAT Forensics to deliver robust and tested low-level FAT building blocks as well as a collection of FAT tools built on top of them. Users can choose from these ready-made tools or, alternatively, combine the available building blocks to create their own bespoke algorithms without modifying the code base.
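
The reuse of a single algorithmic component across transparency and fairness tasks can be sketched in a few lines of library-agnostic Python. The brute-force counterfactual search below is purely illustrative and does not reproduce the FAT Forensics API; it merely shows how the same generator answers an explainability question (what minimal change flips the prediction?) and an individual fairness question (does altering only the protected attribute flip the prediction?).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_search(predict, instance, feature_ranges, steps=20):
    """Find the closest single-feature change that flips the prediction.

    A deliberately naive, brute-force search -- the FAT Forensics
    implementation is more general and more efficient.
    """
    original = predict(instance.reshape(1, -1))[0]
    best, best_distance = None, np.inf
    for index, (low, high) in enumerate(feature_ranges):
        for value in np.linspace(low, high, steps):
            candidate = instance.copy()
            candidate[index] = value
            if predict(candidate.reshape(1, -1))[0] != original:
                distance = np.abs(candidate - instance).sum()
                if distance < best_distance:
                    best, best_distance = candidate, distance
    return best

# Toy data: two numerical features and a binary protected attribute (column 2).
rng = np.random.default_rng(0)
X = np.c_[rng.normal(size=(200, 2)), rng.integers(0, 2, size=200)]
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

instance = X[0]
ranges = list(zip(X.min(axis=0), X.max(axis=0)))

# 1. Post-hoc transparency: a counterfactual explanation of one prediction.
print('counterfactual:', counterfactual_search(model.predict, instance, ranges))

# 2. Individual fairness (disparate treatment): flip only the protected
#    attribute and check whether the prediction changes.
flipped = instance.copy()
flipped[2] = 1 - flipped[2]
print('disparate treatment:',
      model.predict(instance.reshape(1, -1))[0]
      != model.predict(flipped.reshape(1, -1))[0])
```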

Resources

The presentation is provided as an interactive set of slides utilising ipywidgets and built with a Jupyter Notebook based on RISE and reveal.js. The notebook (and hence the presentation) can be executed locally on one’s own machine or launched directly in the web browser through Google Colab or MyBinder. The recording of the talk is available on YouTube.

Slides | Recording

Instructors

Kacper Sokol

Kacper is a research fellow with the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University. His main research focus is transparency – interpretability and explainability – of machine learning systems. In particular, he has worked on enhancing the transparency of logical predictive models (and their ensembles) with counterfactual explanations. Kacper is the designer and lead developer of the FAT Forensics package.

Contact
Kacper.Sokol@rmit.edu.au

References

  1. Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, and Peter Flach. 2020. FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. Journal of Open Source Software, 5(49), p.1904. https://joss.theoj.org/papers/10.21105/joss.01904 

  2. Kacper Sokol, Raul Santos-Rodriguez, and Peter Flach. 2019. FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. arXiv preprint arXiv:1909.05167. https://arxiv.org/abs/1909.05167