Authors
M.W. Hanna
Ollie Liu
Alexandre Variengien
Date (dd-mm-yyyy)
2023
Title
How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
Publication Year
2023
Publisher
Neural Information Processing Systems Foundation
Document type
Conference contribution
Abstract
Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and predict valid two-digit end years (years > 32). We first identify a circuit, a small subset of GPT-2 small's computational graph that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we find related tasks that activate our circuit. Our results suggest that GPT-2 small computes greater-than using a complex but general mechanism that activates across diverse contexts.
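Illustration (not from the paper): a minimal sketch of how one might probe GPT-2 small on the abstract's example prompt using the Hugging Face transformers library, comparing the probability mass the model places on valid end years (> 32) versus invalid ones. The prompt is taken from the abstract; the checkpoint name, helper logic, and thresholds are assumptions for illustration only, and none of this reproduces the paper's circuit analysis.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load GPT-2 small (assumed to be the 124M-parameter "gpt2" checkpoint).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Example prompt from the abstract; the model should complete the two-digit end year.
prompt = "The war lasted from the year 1732 to the year 17"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)

# Sum probability mass on valid end years (> 32) vs. invalid ones (<= 32).
valid, invalid = 0.0, 0.0
for yy in range(100):
    token_ids = tokenizer.encode(f"{yy:02d}")
    if len(token_ids) != 1:
        continue  # skip any two-digit year that is not a single token
    p = probs[token_ids[0]].item()
    if yy > 32:
        valid += p
    else:
        invalid += p

print(f"P(end year > 32)  = {valid:.3f}")
print(f"P(end year <= 32) = {invalid:.3f}")
```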
Permalink
https://hdl.handle.net/11245.1/29578af2-ef5c-4b2e-89ae-0fd8cde31d5d