Authors
Michael Hanna
Yonatan Belinkov
Sandro Pezzelle
Date (dd-mm-yyyy)
10-09-2025
Title
Are Formal and Functional Linguistic Mechanisms Dissociated in Language Models?
Journal
Computational Linguistics
Publication Year
2025
Document type
Article
Abstract
Although large language models (LLMs) are increasingly capable, these capabilities are unevenly distributed: they excel at formal linguistic tasks, such as producing fluent, grammatical text, but struggle more with functional linguistic tasks like reasoning and consistent fact retrieval. Inspired by neuroscience, recent work suggests that to succeed on both formal and functional linguistic tasks, LLMs should use different mechanisms for each; such localization could either be built-in or emerge spontaneously through training. In this paper, we ask: do current models, with fast-improving functional linguistic abilities, exhibit distinct localization of formal and functional linguistic mechanisms? We answer this by finding and comparing the “circuits”, or minimal computational subgraphs, responsible for various formal and functional tasks. Comparing 5 LLMs across 10 distinct tasks, we find that while there is indeed little overlap between circuits for formal and functional tasks, there is also little overlap between the circuits for different formal linguistic tasks, unlike the unified formal language network observed in the human brain. Thus, a single formal linguistic network, unified and distinct from functional task circuits, remains elusive. However, in terms of cross-task faithfulness (the ability of one circuit to solve another’s task), we observe a separation between formal and functional mechanisms, with formal task circuits achieving higher performance on other formal tasks. This suggests the existence of a set of formal linguistic mechanisms that is shared across formal tasks, even if not all mechanisms are strictly necessary for all formal tasks.
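To make the abstract's notion of circuit overlap concrete, the following is a minimal sketch, not the authors' code: it assumes a circuit is represented as a set of edges in the model's computational graph and measures overlap as intersection-over-union. All component names and edges below are hypothetical illustrations.

# Minimal sketch (assumption: circuits as edge sets; overlap as IoU).
def circuit_overlap(circuit_a: set, circuit_b: set) -> float:
    """Intersection-over-union of two circuits' edge sets."""
    if not circuit_a and not circuit_b:
        return 0.0
    return len(circuit_a & circuit_b) / len(circuit_a | circuit_b)

# Hypothetical circuits: edges are (source_component, target_component) pairs.
agreement_circuit = {("embed", "head.0.3"), ("head.0.3", "mlp.1"), ("mlp.1", "logits")}
fact_retrieval_circuit = {("embed", "mlp.5"), ("mlp.5", "head.9.1"), ("head.9.1", "logits")}

print(f"Overlap: {circuit_overlap(agreement_circuit, fact_retrieval_circuit):.2f}")

Cross-task faithfulness, by contrast, is behavioral rather than structural: it would be measured by running one task's circuit on another task's data and scoring its performance, so no edge-set comparison suffices for it.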
Permalink
https://hdl.handle.net/11245.1/953baca2-bdf8-490a-925d-008ccadf1d8b