Extremely low-resource (XLR) languages lack substantial corpora for training NLP models, motivating the use of all available
resources such as dictionaries and grammar books. Machine Translation from One Book (Tanzer et al., 2024) suggests that prompting
long-context LLMs with one grammar book enables translation between English and Kalamang, an XLR language unseen by LLMs: a noteworthy case of linguistics helping an NLP task. We investigate the source of this translation ability, finding that almost all improvements
stem from the book’s parallel examples rather than its grammatical explanations. We find similar results for Nepali and Guarani,
both low-resource languages that LLMs have seen in pretraining, and we achieve performance comparable to that of an LLM prompted with a grammar book by simply fine-tuning an
encoder-decoder translation model. We then investigate where grammar books help by testing two linguistic tasks, grammaticality
judgment and gloss prediction, and we explore what kind of grammatical knowledge helps by introducing a typological feature
prompt that achieves leading results on these more relevant tasks. We thus emphasise the importance of task-appropriate data
for XLR languages: parallel examples for translation, and grammatical data for linguistic tasks. As we find no evidence that
long-context LLMs can make effective use of grammatical explanations for XLR translation, we conclude that data collection for
multilingual XLR tasks such as translation is best focused on parallel data rather than linguistic description.