Authors
Arco van Breda
Erman Acar
Date (dd-mm-yyyy)
01-02-2026
Title
Explaining the Explainer: Understanding the Inner Workings of Transformer-based Symbolic Regression Models
Journal
[No source information available]
Publication Year
2026
Document type
Article
Abstract
Following their success across many domains, transformers have also proven effective for symbolic regression (SR); however, the internal mechanisms underlying their generation of mathematical operators remain largely unexplored. Although mechanistic interpretability has successfully identified circuits in language and vision models, it has not yet been applied to SR. In this article, we introduce PATCHES, an evolutionary circuit discovery algorithm that identifies compact and correct circuits for SR. Using PATCHES, we isolate 28 circuits, providing the first circuit-level characterisation of an SR transformer. We validate these findings through a robust causal evaluation framework based on key notions such as faithfulness, completeness, and minimality. Our analysis shows that mean patching with performance-based evaluation most reliably isolates functionally correct circuits. In contrast, we demonstrate that direct logit attribution and probing classifiers primarily capture correlational features rather than causal ones, limiting their utility for circuit discovery. Overall, these results establish SR as a high-potential application domain for mechanistic interpretability and propose a principled methodology for circuit discovery.
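The abstract reports that mean patching with performance-based evaluation most reliably isolates functionally correct circuits. As a hedged illustration of the general idea (a toy sketch only, not the PATCHES algorithm; the model, components, and loss below are invented for exposition), mean activation patching replaces one component's activation with its mean over a reference set and measures the resulting performance drop:

```python
# Toy sketch of mean activation patching (illustrative assumption, not PATCHES):
# replace one component's activation with its dataset mean and compare
# the patched model's outputs against the unpatched model's outputs.

def component_a(x):
    return 2 * x          # stand-in for an "attention head" activation

def component_b(a, x):
    return a + x          # downstream component consuming that activation

def model(x, patch_a=None):
    # If patch_a is given, the component's activation is overridden.
    a = component_a(x) if patch_a is None else patch_a
    return component_b(a, x)

reference_inputs = [1.0, 2.0, 3.0, 4.0]

# Mean activation of component_a over the reference set.
mean_a = sum(component_a(x) for x in reference_inputs) / len(reference_inputs)

def loss(inputs, patch_a=None):
    # Squared deviation from the unpatched model's own outputs.
    return sum((model(x, patch_a) - model(x)) ** 2 for x in inputs) / len(inputs)

baseline = loss(reference_inputs)                 # 0.0 by construction
patched = loss(reference_inputs, patch_a=mean_a)  # rises if component_a matters
print(f"baseline loss: {baseline:.3f}, mean-patched loss: {patched:.3f}")
```

A large gap between the patched and baseline loss is evidence that the patched component is causally necessary; components whose mean-patching leaves performance unchanged can be pruned from a candidate circuit.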
Permalink
https://hdl.handle.net/11245.1/81af395a-e80b-40a8-b7d3-3d4003c9392f