We introduce two new benchmarks, REST and REST+ (Render-Equivalence Stress Tests), to enable systematic evaluation of cross-modal
inconsistency in multimodal large language models (MLLMs). MLLMs are trained to represent vision and language in the same
embedding space, yet they do not perform the same tasks equally well in both modalities. Our benchmarks contain samples with the same semantic
information in three modalities (image, text, and mixed), and we show that state-of-the-art MLLMs cannot consistently reason over
these different modalities. We evaluate 15 MLLMs and find that the degree of modality inconsistency varies substantially,
even when accounting for text recognition (OCR) errors. Neither rendering text as an image nor rendering an image as text
solves the inconsistency. Even when OCR is correct, we find that visual characteristics (text colour and resolution, but not
font) and the number of vision tokens affect model performance. Finally, we find that our c