Paper List
-
Discovery of a Hematopoietic Manifold in scGPT Yields a Method for Extracting Performant Algorithms from Biological Foundation Model Internals
This work addresses the core challenge of extracting reusable, interpretable, and high-performance biological algorithms from the opaque internal representations...
-
MS2MetGAN: Latent-space adversarial training for metabolite–spectrum matching in MS/MS database search
This paper addresses the critical bottleneck in metabolite identification: the generation of high-quality negative training samples that are structura...
-
Toward Robust, Reproducible, and Widely Accessible Intracranial Language Brain-Computer Interfaces: A Comprehensive Review of Neural Mechanisms, Hardware, Algorithms, Evaluation, Clinical Pathways and Future Directions
This review addresses the core challenge of fragmented and heterogeneous evidence that hinders the clinical translation of intracranial language BCIs,...
-
Less Is More in Chemotherapy of Breast Cancer
By incorporating cell-cycle time delays and competition terms, this work addresses the oversimplification of existing tumor-immune models, enabling quantitative comparison of chemotherapy regimens.
-
Fold-CP: A Context Parallelism Framework for Biomolecular Modeling
This paper addresses the critical bottleneck of GPU memory limitations that restrict AlphaFold 3-like models to processing only a few thousand residues...
-
Open Biomedical Knowledge Graphs at Scale: Construction, Federation, and AI Agent Access with Samyama Graph Database
This paper addresses the core pain point of fragmented biomedical data by constructing and federating large-scale, open knowledge graphs to enable sea...
-
Predictive Analytics for Foot Ulcers Using Time-Series Temperature and Pressure Data
This paper addresses the critical need for continuous, real-time monitoring of diabetic foot health by developing an unsupervised anomaly detection framework...
-
Hypothesis-Based Particle Detection for Accurate Nanoparticle Counting and Digital Diagnostics
This paper addresses the core challenge of achieving accurate, interpretable, and training-free nanoparticle counting in digital diagnostic assays, wh...
-
Human-like Object Grouping in Self-supervised Vision Transformers
Zuckerman Mind Brain Behavior Institute, Columbia University | Department of Social Science and AI, Hankuk University of Foreign Studies | Nanyang Technological University | University of Hong Kong | Stony Brook University
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how well self-supervised vision models capture human-like object grouping in natural scenes, bridging the gap between computational representations and behavioral psychophysics.
Key Innovations
- Methodology: Introduces a large-scale behavioral benchmark (1,020 trials) scaling up classical psychophysics to natural images, enabling quantitative comparison between model representations and human object perception.
- Methodology: Proposes a novel object-centric metric based on ROC analysis of patch-level affinity maps that quantifies object boundary alignment without requiring object-level supervision.
- Biology: Demonstrates that Gram matrix structure, capturing patch similarity patterns, is a key mechanism driving perceptual alignment between self-supervised models and human vision.
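The ROC-based object-centric metric described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it assumes patch embeddings from a vision transformer and a ground-truth segmentation mask assigning each patch an object id, and all function and variable names are illustrative. The idea: score every patch pair by cosine affinity, label pairs as within-object or between-object, and measure how well affinity separates the two with ROC AUC.

```python
import numpy as np

def object_centric_auc(patch_feats, patch_labels):
    """ROC AUC separating within-object from between-object patch affinities.

    patch_feats:  (N, D) array of patch embeddings from a vision model
    patch_labels: (N,) integer object id per patch (from a segmentation mask)
    """
    # Cosine-similarity affinity matrix over all patch pairs
    f = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    affinity = f @ f.T

    # Positive pairs: both patches lie on the same object; negatives: different objects
    same = patch_labels[:, None] == patch_labels[None, :]
    iu = np.triu_indices(len(patch_labels), k=1)  # each unordered pair once
    scores, targets = affinity[iu], same[iu]

    # ROC AUC via the rank-sum (Mann-Whitney U) identity
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = targets.sum()
    n_neg = len(targets) - n_pos
    return (ranks[targets].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC near 1.0 means patches on the same object are reliably more similar to each other than to patches on other objects; an AUC near 0.5 means the representation carries no object-centric structure under this probe.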
Main Findings
- Self-supervised Transformer models trained with DINO objectives show strongest alignment with human behavior, with DINOv3 ViT-B achieving 91.9% grouping accuracy and highest noise-normalized Spearman correlation (Fig. 4A).
- Object-centric structure in patch representations, quantified by ROC AUC, strongly predicts behavioral alignment across models (correlation shown in Fig. 6B), with DINO-based models consistently outperforming supervised counterparts.
- Gram matrix distillation improves supervised models' alignment with human behavior, converging with independent evidence that Gram anchoring enhances DINOv3's feature quality.
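The Gram-matrix matching mentioned in the findings can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's training code: the Gram matrix of L2-normalized patch features encodes pairwise patch similarity, and a distillation loss can penalize the mismatch between a student's and a teacher's similarity structure. NumPy is used for clarity; in practice this would be a differentiable loss inside a deep-learning framework.

```python
import numpy as np

def gram(patch_feats):
    """Gram matrix of L2-normalized patch features: pairwise patch similarities."""
    f = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    return f @ f.T

def gram_distillation_loss(student_feats, teacher_feats):
    """Mean squared difference between student and teacher similarity structure.

    student_feats, teacher_feats: (N, D) patch embeddings for the same image,
    e.g. from a supervised model (student) and a DINO-style model (teacher).
    """
    return float(np.mean((gram(student_feats) - gram(teacher_feats)) ** 2))
```

Minimizing this loss pushes the student to reproduce which patches the teacher treats as similar, without requiring the raw features themselves to match.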
Abstract: Vision foundation models trained with self-supervised objectives achieve strong performance across diverse tasks and exhibit emergent object segmentation properties. However, their alignment with human object perception remains poorly understood. Here, we introduce a behavioral benchmark in which participants make same/different object judgments for dot pairs on naturalistic scenes, scaling up a classical psychophysics paradigm to over 1000 trials. We test a diverse set of vision models using a simple readout from their representations to predict subjects' reaction times. We observe a steady improvement across model generations, with both architecture and training objective contributing to alignment, and transformer-based models trained with the DINO self-supervised objective showing the strongest performance. To investigate the source of this improvement, we propose a novel metric to quantify the object-centric component of representations by measuring patch similarity within and between objects. Across models, stronger object-centric structure predicts human segmentation behavior more accurately. We further show that matching the Gram matrix of supervised transformer models, capturing similarity structure across image patches, with that of a self-supervised model through distillation improves their alignment with human behavior, converging with the prior finding that Gram anchoring improves DINOv3's feature quality. Together, these results demonstrate that self-supervised vision models capture object structure in a behaviorally human-like manner, and that Gram matrix structure plays a role in driving perceptual alignment.