Paper List
-
A Theoretical Framework for the Formation of Large Animal Groups: Topological Coordination, Subgroup Merging, and Velocity Inheritance
This paper addresses the core problem of how large, coordinated animal groups form in nature, challenging the classical view of gradual aggregation by...
-
CONFIDE: Hallucination Assessment for Reliable Biomolecular Structure Prediction and Design
This paper addresses the critical limitation of current protein structure prediction models (like AlphaFold3) where high-confidence scores (pLDDT) can...
-
Generative design and validation of therapeutic peptides for glioblastoma based on a potential target ATP5A
This paper addresses the critical bottleneck in therapeutic peptide design: how to efficiently optimize lead peptides with geometric constraints while...
-
Pharmacophore-based design by learning on voxel grids
This paper addresses the computational bottleneck and limited novelty in conventional pharmacophore-based virtual screening by introducing a voxel cap...
-
Human-Centred Evaluation of Text-to-Image Generation Models for Self-expression of Mental Distress: A Dataset Based on GPT-4o
This paper addresses the critical gap in evaluating how AI-generated images can effectively support cross-cultural mental distress communication, part...
-
ANNE Apnea Paper
This paper addresses the core challenge of achieving accurate, event-level sleep apnea detection and characterization using a non-intrusive, multimoda...
-
DeeDeeExperiment: Building an infrastructure for integrating and managing omics data analysis results in R/Bioconductor
This paper addresses the critical bottleneck of managing and organizing the growing volume of differential expression and functional enrichment analys...
-
Cross-Species Antimicrobial Resistance Prediction from Genomic Foundation Models
This paper addresses the core challenge of predicting antimicrobial resistance across phylogenetically distinct bacterial species, where traditional m...
-
Human-like Object Grouping in Self-supervised Vision Transformers
Zuckerman Mind Brain Behavior Institute, Columbia University | Department of Social Science and AI, Hankuk University of Foreign Studies | Nanyang Technological University | University of Hong Kong | Stony Brook University
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how well self-supervised vision models capture human-like object grouping in natural scenes, bridging the gap between computational representations and behavioral psychophysics.
Key Innovations
- Methodology: Introduces a large-scale behavioral benchmark (1,020 trials) scaling up classical psychophysics to natural images, enabling quantitative comparison between model representations and human object perception.
- Methodology: Proposes a novel object-centric metric based on ROC analysis of patch-level affinity maps that quantifies object boundary alignment without requiring object-level supervision.
- Biology: Demonstrates that Gram matrix structure, capturing patch similarity patterns, is a key mechanism driving perceptual alignment between self-supervised models and human vision.
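The ROC-based object-centric metric could be sketched roughly as follows: score every patch pair by feature similarity, label pairs as same-object or different-object using a segmentation mask, and measure how well similarity separates the two. This is an illustrative sketch only; the function name, toy inputs, and the rank-sum AUC computation are assumptions, not the paper's implementation.

```python
import numpy as np

def object_centric_auc(patch_feats, patch_labels):
    """ROC-AUC sketch: how well patch-feature similarity separates
    same-object patch pairs from different-object pairs.

    patch_feats:  (N, D) array of patch embeddings (hypothetical input)
    patch_labels: (N,) integer object id per patch, e.g. from a mask
    """
    # Cosine similarity between all patch pairs (the "affinity map")
    f = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    sim = f @ f.T

    # Off-diagonal pairs, labeled same-object (1) vs different-object (0)
    iu = np.triu_indices(len(patch_labels), k=1)
    scores = sim[iu]
    same = (patch_labels[iu[0]] == patch_labels[iu[1]]).astype(float)

    # AUC via the rank-sum (Mann-Whitney U) identity
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = same.sum(), (1 - same).sum()
    return (ranks[same == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A model whose patches cluster tightly within objects scores near 1.0; one whose affinities ignore object boundaries scores near chance (0.5).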
Main Conclusions
- Self-supervised Transformer models trained with DINO objectives show strongest alignment with human behavior, with DINOv3 ViT-B achieving 91.9% grouping accuracy and highest noise-normalized Spearman correlation (Fig. 4A).
- Object-centric structure in patch representations, quantified by ROC AUC, strongly predicts behavioral alignment across models (correlation shown in Fig. 6B), with DINO-based models consistently outperforming supervised counterparts.
- Gram matrix distillation improves supervised models' alignment with human behavior, converging with independent evidence that Gram anchoring enhances DINOv3's feature quality.
Abstract: Vision foundation models trained with self-supervised objectives achieve strong performance across diverse tasks and exhibit emergent object segmentation properties. However, their alignment with human object perception remains poorly understood. Here, we introduce a behavioral benchmark in which participants make same/different object judgments for dot pairs on naturalistic scenes, scaling up a classical psychophysics paradigm to over 1000 trials. We test a diverse set of vision models using a simple readout from their representations to predict subjects’ reaction times. We observe a steady improvement across model generations, with both architecture and training objective contributing to alignment, and transformer-based models trained with the DINO self-supervised objective showing the strongest performance. To investigate the source of this improvement, we propose a novel metric to quantify the object-centric component of representations by measuring patch similarity within and between objects. Across models, stronger object-centric structure predicts human segmentation behavior more accurately. We further show that matching the Gram matrix of supervised transformer models, capturing similarity structure across image patches, with that of a self-supervised model through distillation improves their alignment with human behavior, converging with the prior finding that Gram anchoring improves DINOv3’s feature quality. Together, these results demonstrate that self-supervised vision models capture object structure in a behaviorally human-like manner, and that Gram matrix structure plays a role in driving perceptual alignment.
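The Gram-matrix distillation idea in the abstract — matching the patch-similarity structure of a supervised student to that of a self-supervised teacher — can be sketched minimally as below. The function names and the mean-squared loss form are assumptions for illustration, not the paper's training code.

```python
import numpy as np

def patch_gram(feats):
    """Gram matrix of L2-normalized patch features: entry (i, j) is the
    cosine similarity between patches i and j of one image."""
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    return f @ f.T

def gram_distill_loss(student_feats, teacher_feats):
    """Mean squared difference between student and teacher Gram matrices,
    pushing the student toward the teacher's patch-similarity structure."""
    diff = patch_gram(student_feats) - patch_gram(teacher_feats)
    return float((diff ** 2).mean())
```

Note that the loss depends only on pairwise patch similarities, not on the raw features, so student and teacher may have different embedding dimensions as long as they share the same number of patches.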