FAT Forensics Events

An overview of machine learning explainability for the EURO 2021 special session on Fair and Explainable Models

Making Machine Learning Explanations Truthful and Intelligible

An interactive overview of machine learning explainability, with a focus on the robustness of surrogate explainers for image and tabular data, for the special session on Fair and Explainable Models held at the 31st European Conference on Operational Research.
Where: Virtual Room 32, University of West Attica, Athens, Greece.
When: 2.30–3.00pm on Monday, July 12th, 2021.

About the Invited Talk

Predictive models come with a myriad of well-defined performance metrics that guide us through their development, validation and deployment. While the multiplicity of these measurements poses challenges in itself, the lack of agreed-upon evaluation criteria in machine learning explainability creates even more fundamental issues. For one, transparency of predictive algorithms tends to be elusive and notoriously difficult to measure. Without universal and objective interpretability metrics, our evaluation of such systems may be subject to personal preferences embodied by an “I know it when I see it” attitude and to human cognitive biases, for example, the illusory truth effect and confirmation bias. Resorting to user studies – considered the field’s gold standard – may not be of much help either when the assumptions of the test and deployment environments are misaligned.

Should we take machine learning explanations at face value? What should we do when we are shown multiple, possibly conflicting, explanations? What prior (technical) knowledge do we need to appreciate their insights and limitations? With all of these questions and few definitive answers, how do we go beyond naive compliance with legal frameworks such as the GDPR? In this talk I will show how to identify obscure assumptions and overcome inherent limitations of black-box explainers to generate truthful and intelligible insights that can be harnessed to satisfy our scientific curiosity and create business value.


In particular, this talk will look into surrogate explainability, which is a popular transparency technique for assessing the trustworthiness of predictions output by black-box machine learning models. While such explainers are often presented as monolithic, end-to-end tools, they in fact exhibit high modularity and considerable scope for parameterisation [1]. This observation suggests that each use case may require a bespoke surrogate built and tuned for the problem at hand. To this end, the talk will review the influence of the parameterisation and configuration of surrogates on the explanations that they generate for tabular and image data. More precisely, it will demonstrate the significance of segmentation granularity and super-pixel occlusion colour for images, as well as discretisation and binarisation of continuous features for tabular data. Understanding these dependencies can help with building robust and trustworthy surrogate explainers whose insights can be relied upon.
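As a concrete illustration of the image-related parameters, the sketch below builds the interpretable (super-pixel on/off) representation that underpins LIME-style image surrogates and exposes the two choices discussed above: segmentation granularity and occlusion colour. It uses scikit-image rather than any particular explainer library, and the specific image, segment counts and colour are illustrative assumptions, not the exact configuration studied in the talk.

```python
# A minimal sketch (not FAT Forensics code) of the interpretable
# super-pixel representation behind LIME-style image surrogates.
# The image, segment counts and occlusion colour are illustrative assumptions.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()
rng = np.random.default_rng(42)

# Segmentation granularity: coarser or finer super-pixel partitions
for n_segments in (10, 50, 200):
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    print(f'requested {n_segments:3d} super-pixels, got {segments.max() + 1}')

# Occlusion colour: how "switched-off" super-pixels are painted over
segments = slic(image, n_segments=50, compactness=10, start_label=0)
occlusion_colour = np.array([0, 0, 0])  # black; mean segment colour is another common choice


def occlude(on_off, image, segments, colour):
    """Paint over the super-pixels whose entry in ``on_off`` is 0."""
    occluded = image.copy()
    for segment_id in np.unique(segments):
        if not on_off[segment_id]:
            occluded[segments == segment_id] = colour
    return occluded


# Random binary samples in the interpretable representation; a black-box model
# would predict on the corresponding occluded images to provide the targets
# for training a sparse linear surrogate.
n_superpixels = np.unique(segments).size
binary_samples = rng.integers(0, 2, size=(8, n_superpixels))
occluded_images = [occlude(sample, image, segments, occlusion_colour)
                   for sample in binary_samples]
```

Rerunning the sketch with a different occlusion colour or segment count changes the binary representation – and hence the resulting explanation – which is precisely the sensitivity examined in the talk.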

FAT Forensics (Software)

To support the goals of this invited talk, we employ FAT Forensics – an open source Python package that can inspect selected fairness, accountability and transparency aspects of data (and their features), models and predictions. The toolbox spans all of the FAT domains because many of them share underlying algorithmic components that can be reused in multiple different implementations, often across the FAT borders. This interoperability allows, for example, a counterfactual data point generator to be used as a post-hoc explainer of black-box predictions on the one hand, and as an individual fairness (disparate treatment) inspection tool on the other. The modular architecture [2, 3] enables FAT Forensics to deliver robust and tested low-level FAT building blocks as well as a collection of FAT tools built on top of them. Users can choose from these ready-made tools or, alternatively, combine the available building blocks to create their own bespoke algorithms without needing to modify the code base, as the sketch below illustrates.
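To give a flavour of this building-block philosophy, the sketch below assembles a bespoke tabular surrogate from generic, interchangeable components – a local sampler, a quartile discretiser/binariser and a distance-weighted linear model – using scikit-learn. It illustrates the modular design rather than the FAT Forensics API itself, and the data set, sampling scheme and kernel are illustrative assumptions.

```python
# A minimal sketch of composing a bespoke tabular surrogate from generic
# building blocks (sampler -> discretiser/binariser -> weighting -> linear
# model). It mirrors the modular, bLIMEy-style design but is built with
# scikit-learn, not the FAT Forensics API; all parameters are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=42).fit(X, y)  # model to explain
instance = X[0]                                                # instance to explain
rng = np.random.default_rng(42)

# Building block 1: sample a local neighbourhood around the instance
samples = instance + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))

# Building block 2: discretise into quartiles and binarise -- does a sample
# fall into the same bin as the explained instance?
discretiser = KBinsDiscretizer(n_bins=4, encode='ordinal',
                               strategy='quantile').fit(X)
binary = (discretiser.transform(samples)
          == discretiser.transform([instance])).astype(int)

# Building block 3: weight samples by their proximity to the instance
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-distances ** 2 / (2 * distances.std() ** 2))

# Building block 4: fit an inherently transparent model to the black-box
# predictions for the sampled neighbourhood
class_index = list(black_box.classes_).index(black_box.predict([instance])[0])
target = black_box.predict_proba(samples)[:, class_index]
surrogate = Ridge(alpha=1.0).fit(binary, target, sample_weight=weights)

print(dict(zip(load_iris().feature_names, surrogate.coef_.round(3))))
```

Swapping any single block – for example, the sampling distribution, the number of discretisation bins or the surrogate model family – yields a differently behaved explainer, which is why exposing these components individually matters for building robust and trustworthy surrogates.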

Resources

The presentation is provided as an interactive set of slides utilising ipywidgets and built as a Jupyter Notebook with RISE and reveal.js. This notebook (hence the presentation) can be executed locally on one’s own machine or launched directly in the web browser through Google Colab or MyBinder.

Instructors

Kacper Sokol

Kacper is a research associate with the TAILOR project at the University of Bristol. His main research focus is transparency – interpretability and explainability – of machine learning systems. In particular, he has worked on enhancing the transparency of logical predictive models (and their ensembles) with counterfactual explanations. Kacper is the designer and lead developer of the FAT Forensics package.

Contact
K.Sokol@bristol.ac.uk

References

  1. Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, and Peter Flach. 2019. bLIMEy: Surrogate Prediction Explanations Beyond LIME. 2019 Workshop on Human-Centric Machine Learning (HCML 2019) at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. https://arxiv.org/abs/1910.13016

  2. Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, and Peter Flach. 2020. FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. Journal of Open Source Software, 5(49), p.1904. https://joss.theoj.org/papers/10.21105/joss.01904 

  3. Kacper Sokol, Raul Santos-Rodriguez, and Peter Flach. 2019. FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. arXiv preprint arXiv:1909.05167. https://arxiv.org/abs/1909.05167