A fundamental aspect of the semantics of natural language is that novel meanings can be formed from the composition of previously
known parts. Vision-language models (VLMs) have made significant progress in recent years; however, there is evidence that
they are unable to perform this kind of composition. For example, given an image of a red cube and a blue cylinder, a VLM
such as CLIP is likely to mislabel the image as a red cylinder or a blue cube, indicating that it represents the image
as a ‘bag-of-words’ and fails to capture compositional semantics. Diffusion models have recently gained significant attention
for their impressive generative abilities, and zero-shot classifiers based on diffusion models have been shown to perform
competitively with CLIP in certain compositional tasks. We explore whether the generative Diffusion Classifier has improved
compositional generalisation abilities compared to discriminative models. We assess three models—Diffusion Classifier, CLIP,
and ViLT—on their ability to bind objects with attributes and relations in both zero-shot learning (ZSL) and generalised zero-shot
learning (GZSL) settings. Our results show that the Diffusion Classifier and ViLT perform well on concept-binding tasks, but
that all models struggle significantly with the relational GZSL task, underscoring the broader challenges VLMs face with relational
reasoning. Analysis of CLIP embeddings suggests that the difficulty may stem from overly similar representations of relational
concepts such as left and right. Code and dataset are available at [link redacted for anonymity].