What is interpretability and what approaches are there?

Interpretability is an area of alignment research that aims to make machine learning (ML) systems easier for humans to understand.

It's hard for us to understand current AI systems because their computations often depend on billions of parameters learned from data. Approaches to interpretability include mechanistic interpretability, finding important input features, explanation by examples, natural language explanations, and using ML architectures that are intrinsically interpretable.

Mechanistic interpretability is about interpreting an ML model’s internal representations. One way to do this is activation maximization: optimizing the input so that one particular neuron is activated as strongly as possible. The optimized input is then an indicator of the concept that the neuron represents. Central to mechanistic interpretability is the circuits thread, which focuses on interpreting the algorithms implemented by subgraphs (circuits) of neural networks; there is also work on circuits in transformers in particular. One drawback of mechanistic interpretability is that interpretability is not composable, i.e. understanding each individual component of a system does not necessarily allow us to understand the system as a whole. However, there may be a way of hierarchically decomposing a system that allows us to understand each layer of abstraction, and thus the system as a whole.

Figure: feature visualization of a neuron that corresponds to dog-like features.
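
To make this concrete, here is a minimal sketch of activation maximization in PyTorch, assuming a pretrained torchvision GoogLeNet; the layer and channel below are arbitrary illustrative choices, and real feature visualization work adds regularization (such as jitter and frequency penalties) to get recognizable images.

```python
# A minimal sketch of activation maximization, assuming a pretrained
# torchvision GoogLeNet; the layer and channel are arbitrary choices.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Record the activations of one intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(value=output)
)

# Start from random noise and ascend the gradient of one channel's mean activation.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 13  # hypothetical channel index

for step in range(256):
    optimizer.zero_grad()
    model(image)
    loss = -activations["value"][0, channel].mean()  # maximize the activation
    loss.backward()
    optimizer.step()

# `image` is now an input that strongly activates the chosen channel,
# hinting at the concept that channel represents.
```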

Finding important input features means figuring out which input features are most relevant in determining the system's output. In the case of image classification, we can "highlight" parts of the image according to their relevance, producing a saliency map. One way to do this is to take the derivative of the output with respect to the different parts of the input: this derivative indicates how much the output changes if we change a particular part of the input, i.e. how important that part of the input is for the output.
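
As a minimal sketch (assuming a pretrained torchvision classifier and a random stand-in tensor in place of a real preprocessed image), a gradient-based saliency map can be computed like this:

```python
# A minimal sketch of a gradient-based saliency map for an image classifier.
import torch
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Derivative of the predicted class's score with respect to every input pixel.
logits[0, top_class].backward()

# Aggregate over color channels: large values mark pixels whose change
# would most affect the output, i.e. the most "important" pixels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```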

Saliency maps can help us notice cases in which an image classifier has learned to use features it shouldn't be using for a given task. For example, the paper Unmasking Clever Hans predictors and assessing what machines really learn uses saliency maps to show that an image classifier trained to identify horses was not using the parts of an image that contained the horse at all, but instead was relying on the name of the photographer printed in a corner, because one of the photographers whose photos were included in the training data primarily took photos of horses.

However, many common saliency methods fail basic sanity checks: for example, the maps they produce barely change when the model's weights are randomized. Saliency maps are therefore not sufficient for a reliable understanding of ML systems.
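
As a sketch of that weight-randomization check, under the same assumptions as the saliency example above, one can compare the map from the trained model with the map after re-initializing the weights; a rank correlation near 1 would indicate the method barely depends on what the model learned:

```python
# A minimal sketch of the model-randomization sanity check for saliency maps.
import copy
import torch
import torchvision.models as models

def saliency_map(model, x):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax(dim=1).item()].backward()
    return x.grad.abs().max(dim=1).values.squeeze(0)

trained = models.resnet18(weights="DEFAULT").eval()

# Copy the model and re-initialize its learned weights.
randomized = copy.deepcopy(trained)
for module in randomized.modules():
    if hasattr(module, "reset_parameters"):
        module.reset_parameters()

x = torch.randn(1, 3, 224, 224)  # stand-in for a real image
a = saliency_map(trained, x).flatten()
b = saliency_map(randomized, x).flatten()

# Rank correlation between the two maps; a value near 1 means the map
# barely depends on the learned weights, which fails the sanity check.
ranks = torch.stack([a.argsort().argsort().float(), b.argsort().argsort().float()])
print(f"rank correlation: {torch.corrcoef(ranks)[0, 1].item():.2f}")
```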

Explanation by examples means showing examples from the training data that have features similar to those of the current input, as in the paper This Looks Like That: Deep Learning for Interpretable Image Recognition.
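
A much simpler variant of this idea than the paper's prototype-based method, sketched here with stand-in tensors in place of a real dataset, is to embed inputs with a pretrained network and retrieve the training examples closest to the query in feature space:

```python
# A minimal sketch of explanation by examples via nearest neighbors in a
# learned feature space (stand-in tensors take the place of a real dataset).
import torch
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()
encoder = torch.nn.Sequential(*list(model.children())[:-1])  # drop the classifier head

def embed(images):
    with torch.no_grad():
        return encoder(images).flatten(1)  # shape (batch, 512)

train_images = torch.randn(100, 3, 224, 224)  # stand-in for the training set
query = torch.randn(1, 3, 224, 224)           # stand-in for the input to explain

# "This input looks like these training examples."
similarities = torch.nn.functional.cosine_similarity(embed(query), embed(train_images))
print("most similar training examples:", similarities.topk(5).indices.tolist())
```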

Natural language explanations are sentences describing a model’s reasons for its outputs. For example, in the paper Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks, a human and an AI play a virtual cooking game together, and the AI explains its plans in natural language. The paper finds that these explanations significantly improve the human-AI team's performance.

Finding important features, explanation by examples, and natural language explanations are post-hoc explanations: they are generated after the fact, and are therefore likely not to be faithful (i.e. not to accurately describe the model’s decision process). Interpretable architectures, in contrast, are simple enough to understand without additional tools. Cynthia Rudin has argued for using interpretable architectures in high-stakes situations. However, current interpretable architectures usually perform worse than other architectures on the same tasks.
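
For contrast, here is a minimal sketch of an intrinsically interpretable model: a shallow decision tree trained with scikit-learn on one of its bundled tabular datasets, whose learned rules can be read off directly with no post-hoc explanation step.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The whole decision process prints as a handful of human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```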
