Paper List

Journal: arXiv preprint
Publication Date: 2026-03-14
Computer Vision | Computational Neuroscience

Human-like Object Grouping in Self-supervised Vision Transformers

Zuckerman Mind Brain Behavior Institute, Columbia University | Department of Social Science and AI, Hankuk University of Foreign Studies | Nanyang Technological University | University of Hong Kong | Stony Brook University

Hossein Adeli, Seoyoung Ahn, Andrew Luo, Mengmi Zhang, Nikolaus Kriegeskorte, Gregory Zelinsky

30-Second Read

IN SHORT: This paper addresses the core challenge of quantifying how well self-supervised vision models capture human-like object grouping in natural scenes, bridging the gap between computational representations and behavioral psychophysics.

Core Innovations

  • Methodology: Introduces a large-scale behavioral benchmark (1,020 trials) that scales classical psychophysics up to natural images, enabling quantitative comparison between model representations and human object perception.
  • Methodology: Proposes a novel object-centric metric, based on ROC analysis of patch-level affinity maps, that quantifies alignment with object boundaries without requiring object-level supervision (see the sketch after this list).
  • Biology: Demonstrates that Gram-matrix structure, which captures patch-similarity patterns, is a key mechanism driving the perceptual alignment between self-supervised models and human vision.
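
A minimal sketch of such a patch-affinity ROC metric, assuming (N, D) patch embeddings from a ViT and a patch-level object-id mask; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def object_centric_auc(patches: np.ndarray, labels: np.ndarray) -> float:
    """patches: (N, D) patch embeddings; labels: (N,) object id per patch.

    Scores how well cosine affinity separates within-object patch pairs
    (positives) from between-object pairs (negatives).
    """
    # Cosine similarity between every pair of patches
    normed = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sim = normed @ normed.T

    # Keep each unordered pair once, excluding self-similarity
    i, j = np.triu_indices(len(labels), k=1)
    same_object = (labels[i] == labels[j]).astype(int)

    # AUC: probability that a random within-object pair has higher
    # affinity than a random between-object pair
    return roc_auc_score(same_object, sim[i, j])
```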

Main Conclusions

  • Self-supervised transformer models trained with DINO objectives show the strongest alignment with human behavior, with DINOv3 ViT-B achieving 91.9% grouping accuracy and the highest noise-normalized Spearman correlation (Fig. 4A).
  • Object-centric structure in patch representations, quantified by ROC AUC, strongly predicts behavioral alignment across models (correlation shown in Fig. 6B), with DINO-based models consistently outperforming their supervised counterparts.
  • Gram-matrix distillation improves supervised models' alignment with human behavior, converging with independent evidence that Gram anchoring enhances DINOv3's feature quality (a sketch of this distillation loss follows this list).
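
A hedged sketch of Gram-matrix distillation between a supervised student and a self-supervised teacher, assuming both expose (B, N, D) patch tokens for the same image batch; this follows the paper's description of matching patch-similarity structure, but the exact loss and names here are assumptions:

```python
import torch
import torch.nn.functional as F

def gram(tokens: torch.Tensor) -> torch.Tensor:
    """Patch-similarity Gram matrix: (B, N, D) -> (B, N, N)."""
    tokens = F.normalize(tokens, dim=-1)      # unit-norm patch features
    return tokens @ tokens.transpose(-2, -1)  # pairwise cosine similarities

def gram_distillation_loss(student_tokens: torch.Tensor,
                           teacher_tokens: torch.Tensor) -> torch.Tensor:
    """Frobenius-style distance between student and teacher Gram matrices."""
    return F.mse_loss(gram(student_tokens), gram(teacher_tokens).detach())
```

Note that this loss leaves the student's individual features free to differ from the teacher's; only the relational (patch-to-patch similarity) structure is matched.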
Research Gap: Previous research has focused on low-level Gestalt cues and simple stimuli, lacking a systematic evaluation of how modern vision foundation models align with human object perception in complex natural scenes.

Abstract: Vision foundation models trained with self-supervised objectives achieve strong performance across diverse tasks and exhibit emergent object segmentation properties. However, their alignment with human object perception remains poorly understood. Here, we introduce a behavioral benchmark in which participants make same/different object judgments for dot pairs on naturalistic scenes, scaling up a classical psychophysics paradigm to over 1000 trials. We test a diverse set of vision models using a simple readout from their representations to predict subjects’ reaction times. We observe a steady improvement across model generations, with both architecture and training objective contributing to alignment, and transformer-based models trained with the DINO self-supervised objective showing the strongest performance. To investigate the source of this improvement, we propose a novel metric to quantify the object-centric component of representations by measuring patch similarity within and between objects. Across models, stronger object-centric structure predicts human segmentation behavior more accurately. We further show that matching the Gram matrix of supervised transformer models, capturing similarity structure across image patches, with that of a self-supervised model through distillation improves their alignment with human behavior, converging with the prior finding that Gram anchoring improves DINOv3’s feature quality. Together, these results demonstrate that self-supervised vision models capture object structure in a behaviorally human-like manner, and that Gram matrix structure plays a role in driving perceptual alignment.
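
The "simple readout" can be pictured as thresholding the affinity between the two patches containing the probe dots. The sketch below makes one assumption not spelled out in the abstract, namely that the readout operates on the cosine similarity of the corresponding patch tokens; the helper names are hypothetical:

```python
import torch
import torch.nn.functional as F

def dot_pair_affinity(patch_tokens: torch.Tensor,
                      dot_a: tuple[int, int], dot_b: tuple[int, int],
                      grid_size: int, image_size: int) -> float:
    """patch_tokens: (N, D) patch embeddings for one image, row-major grid.
    dot_a, dot_b: (x, y) pixel coordinates of the two probe dots.
    Returns the cosine similarity of the patches containing the dots."""
    patch_px = image_size // grid_size

    def patch_index(x: int, y: int) -> int:
        return (y // patch_px) * grid_size + (x // patch_px)

    a = F.normalize(patch_tokens[patch_index(*dot_a)], dim=-1)
    b = F.normalize(patch_tokens[patch_index(*dot_b)], dim=-1)
    return (a @ b).item()

# Predict "same object" when affinity exceeds a threshold fit on held-out
# trials; affinity can likewise be mapped (inversely) onto predicted RTs.
```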