What and How of Machine Learning Transparency
Building Bespoke Explainability Tools with Interoperable Algorithmic Components
A hands-on session at the 2021 TAILOR Summer School.

|  |  |
| --- | --- |
| Where: | Virtually via MS Teams. |
| When: | Thursday, September 23rd, 2021. |
Table of Contents
About the Session
Surrogate explainability is a popular transparency technique for assessing trustworthiness of predictions output by black-box machine learning models. While such explainers are often presented as monolithic, end-to-end tools, they in fact exhibit high modularity and scope for parameterisation¹. This observation suggests that each use case may require a bespoke surrogate built and tuned for the problem at hand. This session introduces the three core components of surrogate explainers – data sampling, interpretable representation and explanation generation – in the context of text, image and tabular data. By understanding these building blocks individually, as well as their interplay, we can build robust and trustworthy explainers. However, we can also misuse these insights to create technically valid explainers that are intended to produce misleading justifications of individual predictions. For example, by manipulating the size and distribution of the data sample, and the grouping criteria of the interpretable representation, an automated decision may be shown as fair despite the underlying model being inherently biased. This overview of the theory is complemented by a no-code hands-on exercise facilitated through an iPython widget delivered via a Jupyter Notebook.
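To make the three building blocks concrete, here is a minimal, toolbox-independent sketch of a bLIMEy-style tabular surrogate: Gaussian data sampling around the explained instance, a quartile-based binary interpretable representation, and a distance-weighted linear model for explanation generation. All names and parameter choices (`surrogate_explain`, the sampling scale, the exponential kernel) are illustrative assumptions, not the API of any particular toolbox.

```python
import numpy as np
from sklearn.linear_model import Ridge

def surrogate_explain(instance, predict_fn, n_samples=1000, seed=0):
    """Explain predict_fn at `instance` with a local linear surrogate."""
    rng = np.random.default_rng(seed)

    # 1. Data sampling: Gaussian sample centred on the explained instance
    samples = instance + rng.normal(scale=0.5,
                                    size=(n_samples, instance.size))

    # 2. Interpretable representation: a binary indicator per feature that
    #    is 1 when a sampled value falls in the same quartile bin as the
    #    corresponding feature of the explained instance
    bins = np.quantile(samples, [0.25, 0.5, 0.75], axis=0)
    def digitise(X):
        return np.stack([np.digitize(X[:, i], bins[:, i])
                         for i in range(X.shape[1])], axis=1)
    interpretable = (digitise(samples)
                     == digitise(instance[None, :])).astype(float)

    # 3. Explanation generation: fit a linear surrogate weighted by
    #    proximity to the explained instance; its coefficients serve as
    #    feature importances
    weights = np.exp(-np.linalg.norm(samples - instance, axis=1))
    surrogate = Ridge().fit(interpretable, predict_fn(samples),
                            sample_weight=weights)
    return surrogate.coef_
```

Because each of the three steps is a separate, swappable component, varying the sampling scale or the binning strategy alone can change the resulting explanation – which is precisely the modularity (and the manipulation risk) the session explores.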
FAT Forensics (Software)
To support the goals of this session, we employ
FAT Forensics
– an open source Python package
that can inspect selected fairness, accountability and transparency aspects
of data (and their features), models and predictions.
The toolbox spans all of the FAT domains because many of them share underlying
algorithmic components that can be reused in multiple different
implementations, often across the FAT borders.
This interoperability allows, for example, a counterfactual data point
generator to be used as a post-hoc explainer of black-box predictions on
one hand, and as an individual fairness (disparate treatment) inspection tool
on the other.
The modular architecture²,³ enables
FAT Forensics
to deliver robust and tested
low-level FAT building blocks as well as a collection of FAT tools built on top
of them.
Users can choose from these ready-made tools or, alternatively, combine the
available building blocks to create their own bespoke algorithms without the
need of modifying the code base.
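As a toolbox-independent illustration of this dual use, the sketch below applies a single nearest-counterfactual search in both roles: as a post-hoc explainer of a black-box prediction and as an individual-fairness (disparate treatment) probe. The function name, toy model and data are all hypothetical, chosen only to show the shared algorithmic core.

```python
import numpy as np

def nearest_counterfactual(instance, candidates, predict_fn):
    """Return the closest candidate whose prediction differs from the instance's."""
    label = predict_fn(instance[None, :])[0]
    mask = predict_fn(candidates) != label
    if not mask.any():
        return None
    flipped = candidates[mask]
    return flipped[np.argmin(np.linalg.norm(flipped - instance, axis=1))]

# A toy black box: approve (1) whenever feature 0 exceeds a threshold;
# feature 1 plays the role of a protected attribute
predict_fn = lambda X: (X[:, 0] > 0.5).astype(int)
pool = np.array([[0.2, 1.0], [0.6, 1.0], [0.9, 0.0]])
instance = np.array([0.4, 1.0])

# (a) Transparency: explain a rejection via the nearest accepted point
counterfactual = nearest_counterfactual(instance, pool, predict_fn)

# (b) Individual fairness: if flipping only the protected feature changes
# the prediction, the model exhibits disparate treatment
protected_flip = instance.copy()
protected_flip[1] = 0.0
disparate = (predict_fn(protected_flip[None, :])[0]
             != predict_fn(instance[None, :])[0])
```

The same search routine underpins both analyses; only the candidate set and the interpretation of the result change, which is the kind of cross-FAT reuse the package's architecture is built around.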
Schedule and Resources
The session lasts for 2 hours. The first part – 1 hour and 15 minutes – introduces surrogate explainers of text, image and tabular data, discussing their pros, cons and modularisation. The second part – 45 minutes – is devoted to a hands-on exercise demonstrating the importance of parameterising tabular surrogates, in particular the data sampling and the composition of the interpretable representation (discretisation of numerical features).
| Duration | Activities | Instructor | Resources |
| --- | --- | --- | --- |
| 4.00pm CEST (75 minutes) | Introduction to modular surrogate explainers. | Kacper Sokol | recording, slides, demonstration |
| 5.15pm CEST (45 minutes) | Hands-on with parameterising tabular surrogate explainers. | Kacper Sokol | Jupyter Notebooks |
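To preview why the discretisation step in the hands-on exercise matters, the sketch below compares quartile and fixed-width binning of the same skewed feature using scikit-learn's `KBinsDiscretizer`. The data and bin counts are illustrative assumptions; the point is that the two strategies assign many instances to different groups, so surrogates built on top of them can tell different stories about the same model.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(42)
# A skewed numerical feature, e.g. something age- or income-like
feature = rng.gamma(shape=2.0, scale=15.0, size=(500, 1))

# Quantile bins adapt to the data density; uniform (fixed-width) bins do not
quartile = KBinsDiscretizer(n_bins=4, encode='ordinal', strategy='quantile')
uniform = KBinsDiscretizer(n_bins=4, encode='ordinal', strategy='uniform')

quartile_ids = quartile.fit_transform(feature)
uniform_ids = uniform.fit_transform(feature)

# Fraction of instances that the two discretisations group differently;
# each such disagreement changes the interpretable representation fed
# into the surrogate model
disagreement = np.mean(quartile_ids != uniform_ids)
```

On skewed data the disagreement is substantial, which is exactly the lever the exercise uses to show how an ill-chosen (or deliberately manipulated) interpretable representation shifts the resulting explanation.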
Instructors
Kacper Sokol
Kacper is a research associate with the TAILOR project at
the University of Bristol.
His main research focus is transparency – interpretability and
explainability – of machine learning systems.
In particular, he has done work on enhancing transparency of logical predictive
models (and their ensembles) with counterfactual explanations.
Kacper is the designer and lead developer of the
FAT Forensics
package.
- Contact
- K.Sokol@bristol.ac.uk
Peter Flach
Peter is a Professor of Artificial Intelligence at the University of Bristol. His research interests include mining highly structured data, the evaluation and improvement of machine learning models, and human-centred AI. Peter recently stepped down as Editor-in-Chief of the Machine Learning journal, is President of the European Association for Data Science, and has published several books including “Machine Learning: The Art and Science of Algorithms that Make Sense of Data” (Cambridge University Press, 2012).
- Contact
- Peter.Flach@bristol.ac.uk
References
1. Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, and Peter Flach. 2019. bLIMEy: Surrogate Prediction Explanations Beyond LIME. 2019 Workshop on Human-Centric Machine Learning (HCML 2019) at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada (2019). https://arxiv.org/abs/1910.13016
2. Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, and Peter Flach. 2020. FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. Journal of Open Source Software, 5(49), p.1904. https://joss.theoj.org/papers/10.21105/joss.01904
3. Kacper Sokol, Raul Santos-Rodriguez, and Peter Flach. 2019. FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. arXiv preprint arXiv:1909.05167. https://arxiv.org/abs/1909.05167