Abstract: Visual Foundation Models (VFMs), such as DINO and CLIP, excel at semantic understanding of images but exhibit limited spatial reasoning capabilities, which limits their applicability to embodied systems. As a result, recent work incorporates 3D tasks (such as depth estimation) into VFM training. However, VFM performance remains inconsistent across other spatial tasks, raising the question of whether these models truly possess spatial awareness or merely overfit to specific 3D objectives. To address this question, we introduce the Spatial Relation Recognition Task (SpaRRTa) benchmark, which evaluates the ability of VFMs to identify the relative positions of objects in an image. Unlike traditional 3D objectives that focus on precise metric prediction (e.g., surface normal estimation), SpaRRTa probes a fundamental capability underpinning more advanced forms of human-like spatial understanding. SpaRRTa generates an arbitrary number of photorealistic images with diverse scenes and fully controllable object arrangements, along with freely accessible spatial annotations. Evaluating a range of state-of-the-art VFMs, we reveal significant disparities across their spatial reasoning abilities. Through our analysis, we provide insights into the mechanisms that support or hinder spatial awareness in modern VFMs. We hope that SpaRRTa will serve as a useful tool for guiding the development of future spatially aware visual models.
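Because SpaRRTa renders scenes with fully controllable object arrangements, spatial-relation labels can in principle be read off directly from the known object coordinates rather than annotated by hand. The sketch below illustrates this idea under assumptions of ours, not the benchmark's actual annotation scheme: a camera-frame convention (x: right, y: up, z: depth) and a dominant-axis rule for picking the relation.

```python
import numpy as np

def relation_label(a, b):
    """Classify the dominant spatial relation of object a relative to
    object b from camera-frame coordinates (x: right, y: up, z: depth).
    The axis convention and dominant-axis rule are illustrative
    assumptions, not SpaRRTa's actual annotation procedure."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = a - b
    axis = int(np.argmax(np.abs(d)))  # axis with largest displacement
    names = [("left of", "right of"),
             ("below", "above"),
             ("in front of", "behind")]
    return names[axis][0] if d[axis] < 0 else names[axis][1]

print(relation_label([1.0, 0.2, 5.0], [-1.0, 0.1, 5.2]))  # → right of
print(relation_label([0.0, 0.0, 2.0], [0.1, 0.0, 6.0]))   # → in front of
```

A rule like this makes annotations free to produce at scale, since every rendered scene already carries exact object poses.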
[Figures 1–2 and Tables 1–2 from "Beyond [cls]: Exploring the true potential of Masked Image Modeling representations"]
Abstract: Masked Image Modeling (MIM) has emerged as a popular method for Self-Supervised Learning (SSL) of visual representations. However, for high-level perception tasks, MIM-pretrained models offer lower out-of-the-box representation quality than Joint-Embedding Architectures (JEAs), another prominent SSL paradigm. To understand this performance gap, we analyze the information flow in Vision Transformers (ViTs) learned by both approaches. We reveal that whereas JEAs construct their representation from a selected set of relevant image fragments, MIM models aggregate nearly the whole image content. Moreover, we demonstrate that MIM-trained ViTs retain valuable information within their patch tokens that is not effectively captured by the global [cls] token representation. Therefore, selective aggregation of relevant patch tokens, without any fine-tuning, yields consistently higher-quality MIM representations. To our knowledge, we are the first to highlight the lack of effective representation aggregation as an emergent issue of MIM and to propose directions to address it, contributing to future advances in Self-Supervised Learning.
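The core idea of selective aggregation can be sketched as follows: score each patch token by some relevance criterion, then pool only the top-scoring tokens instead of reading out the [cls] token. The relevance criterion below (cosine similarity to the mean patch token) is an illustrative stand-in of ours, not necessarily the paper's selection rule.

```python
import numpy as np

def aggregate_patches(patch_tokens, k=16):
    """Pool the k most relevant patch tokens into a single image
    representation, bypassing the [cls] token.
    Relevance = cosine similarity to the mean patch token — an
    assumed criterion for illustration, not the paper's exact method."""
    x = patch_tokens / np.linalg.norm(patch_tokens, axis=1, keepdims=True)
    query = x.mean(axis=0)
    query /= np.linalg.norm(query)
    scores = x @ query                       # cosine similarity per patch
    top = np.argsort(scores)[-k:]            # indices of k most relevant
    return patch_tokens[top].mean(axis=0)    # average the selected tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 768))         # e.g. a ViT-B/16 patch grid
rep = aggregate_patches(tokens, k=16)
print(rep.shape)  # → (768,)
```

Note that this requires no fine-tuning: the frozen patch tokens are reweighted at readout time, which is exactly why the abstract frames poor aggregation, rather than poor features, as the emergent issue.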