Paper List
-
Evolutionarily Stable Stackelberg Equilibrium
Bridges the gap between Stackelberg leadership models and evolutionary stability by requiring follower strategies to be robust against invasion by mutants.
-
Recovering Sparse Neural Connectivity from Partial Measurements: A Covariance-Based Approach with Granger-Causality Refinement
Reconstructs full neural connectivity from partial recordings by accumulating covariance statistics across multiple experimental sessions.
-
Atomic Trajectory Modeling with State Space Models for Biomolecular Dynamics
ATMOS bridges the gap between computationally expensive MD simulations and time-limited deep generative models by providing an efficient SSM-based framework for atom-level trajectory generation of biomolecules.
-
Slow evolution towards generalism in a model of variable dietary range
Resolves the paradox of speciation under indirect competition by showing that demographic noise, rather than deterministic dynamics, drives pattern formation and the evolution of generalist diets.
-
Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search
Addresses hallucination in radiology report generation by grounding impression drafts in retrieved historical cases, with explicit citations and a confidence-based rejection mechanism.
-
Unified Policy–Value Decomposition for Rapid Adaptation
Achieves zero-shot adaptation to novel tasks by sharing a low-dimensional goal embedding between the policy and value function via a bilinear decomposition.
-
Mathematical Modeling of Cancer–Bacterial Therapy: Analysis and Numerical Simulation via Physics-Informed Neural Networks
Provides a rigorous, mesh-free PINN framework for simulating and analyzing the complex, spatially heterogeneous interactions in bacterial cancer therapy.
-
Sample-Efficient Adaptation of Drug-Response Models to Patient Tumors under Strong Biological Domain Shift
Enables effective prediction of patient drug response from minimal clinical data by learning transferable representations from unlabeled molecular profiles.
Human-like Object Grouping in Self-supervised Vision Transformers
Zuckerman Mind Brain Behavior Institute, Columbia University | Department of Social Science and AI, Hankuk University of Foreign Studies | Nanyang Technological University | University of Hong Kong | Stony Brook University
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how well self-supervised vision models capture human-like object grouping in natural scenes, bridging the gap between computational representations and behavioral psychophysics.
Core Innovations
- Methodology: Introduces a large-scale behavioral benchmark (1,020 trials) scaling up classical psychophysics to natural images, enabling quantitative comparison between model representations and human object perception.
- Methodology: Proposes a novel object-centric metric based on ROC analysis of patch-level affinity maps that quantifies object boundary alignment without requiring object-level supervision.
- Biology: Demonstrates that Gram matrix structure, capturing patch similarity patterns, is a key mechanism driving perceptual alignment between self-supervised models and human vision.
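The object-centric metric can be illustrated as a ROC analysis over patch affinities: pairs of patches on the same object should score higher than pairs spanning different objects. The sketch below is a minimal illustration under assumed inputs (per-patch embeddings and a patch-level object-ID map derived from segmentation masks), not the authors' exact implementation; the function name `object_centric_auc` is hypothetical.

```python
import numpy as np

def object_centric_auc(patch_feats, patch_objects):
    """ROC AUC separating within-object from between-object patch affinities.

    patch_feats: (N, D) array of patch embeddings from a vision model.
    patch_objects: (N,) integer object IDs per patch (from ground-truth masks).
    """
    # Cosine similarity between all patch pairs (the affinity map)
    f = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    sim = f @ f.T
    # Label each unordered pair: 1 if both patches lie on the same object
    iu = np.triu_indices(len(f), k=1)
    scores = sim[iu]
    labels = (patch_objects[iu[0]] == patch_objects[iu[1]]).astype(int)
    # Rank-based AUC: probability a same-object pair outscores a cross-object pair
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.5 means patch similarity carries no information about object membership; 1.0 means every within-object pair is more similar than every cross-object pair, i.e. affinities align perfectly with object boundaries.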
Key Findings
- Self-supervised Transformer models trained with DINO objectives show strongest alignment with human behavior, with DINOv3 ViT-B achieving 91.9% grouping accuracy and highest noise-normalized Spearman correlation (Fig. 4A).
- Object-centric structure in patch representations, quantified by ROC AUC, strongly predicts behavioral alignment across models (correlation shown in Fig. 6B), with DINO-based models consistently outperforming supervised counterparts.
- Gram matrix distillation improves supervised models' alignment with human behavior, converging with independent evidence that Gram anchoring enhances DINOv3's feature quality.
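The Gram-matrix distillation result above can be sketched as a simple loss term: align the patch-similarity (Gram) matrix of a supervised student with that of a self-supervised teacher, so only the N×N similarity structure is matched and the two models may have different feature dimensions. This is a minimal sketch under that assumption, not the paper's training recipe; `gram_matching_loss` is a hypothetical name.

```python
import numpy as np

def gram_matching_loss(student_patches, teacher_patches):
    """Mean squared distance between normalized patch Gram matrices.

    Both inputs are (N, D) patch features for the same image; D may differ
    between student and teacher, since only the (N, N) similarity
    structure across patches is compared.
    """
    def gram(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)  # unit-norm patches
        return x @ x.T  # (N, N) cosine-similarity Gram matrix

    g_s = gram(student_patches)
    g_t = gram(teacher_patches)
    return np.mean((g_s - g_t) ** 2)
```

In a distillation setup, this term would be added to the student's usual objective, pulling its patch-similarity structure toward the teacher's without requiring the feature spaces themselves to match.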
Abstract: Vision foundation models trained with self-supervised objectives achieve strong performance across diverse tasks and exhibit emergent object segmentation properties. However, their alignment with human object perception remains poorly understood. Here, we introduce a behavioral benchmark in which participants make same/different object judgments for dot pairs on naturalistic scenes, scaling up a classical psychophysics paradigm to over 1000 trials. We test a diverse set of vision models using a simple readout from their representations to predict subjects’ reaction times. We observe a steady improvement across model generations, with both architecture and training objective contributing to alignment, and transformer-based models trained with the DINO self-supervised objective showing the strongest performance. To investigate the source of this improvement, we propose a novel metric to quantify the object-centric component of representations by measuring patch similarity within and between objects. Across models, stronger object-centric structure predicts human segmentation behavior more accurately. We further show that matching the Gram matrix of supervised transformer models, capturing similarity structure across image patches, with that of a self-supervised model through distillation improves their alignment with human behavior, converging with the prior finding that Gram anchoring improves DINOv3’s feature quality. Together, these results demonstrate that self-supervised vision models capture object structure in a behaviorally human-like manner, and that Gram matrix structure plays a role in driving perceptual alignment.