Paper List
-
Macroscopic Dominance from Microscopic Extremes: Symmetry Breaking in Spatial Competition
This paper addresses the fundamental question of how microscopic stochastic advantages in spatial exploration translate into macroscopic resource domi...
-
Linear Readout of Neural Manifolds with Continuous Variables
This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) d...
-
Theory of Cell Body Lensing and Phototaxis Sign Reversal in “Eyeless” Mutants of Chlamydomonas
This paper solves the core puzzle of how eyeless mutants of Chlamydomonas exhibit reversed phototaxis by quantitatively modeling the competition betwe...
-
Cross-Species Transfer Learning for Electrophysiology-to-Transcriptomics Mapping in Cortical GABAergic Interneurons
This paper addresses the challenge of predicting transcriptomic identity from electrophysiological recordings in human cortical interneurons, where li...
-
Uncovering statistical structure in large-scale neural activity with Restricted Boltzmann Machines
This paper addresses the core challenge of modeling large-scale neural population activity (1500-2000 neurons) with interpretable higher-order interac...
-
Realizing Common Random Numbers: Event-Keyed Hashing for Causally Valid Stochastic Models
This paper addresses the critical problem that standard stateful PRNG implementations in agent-based models violate causal validity by making random d...
-
A Standardized Framework for Evaluating Gene Expression Generative Models
This paper addresses the critical lack of standardized evaluation protocols for single-cell gene expression generative models, where inconsistent metr...
-
Single Molecule Localization Microscopy Challenge: A Biologically Inspired Benchmark for Long-Sequence Modeling
This paper addresses the core challenge of evaluating state-space models on biologically realistic, sparse, and stochastic temporal processes, which a...
-
Neural network-based encoding in free-viewing fMRI with gaze-aware models
Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands | Martin Luther University Halle-Wittenberg, Medical Faculty, Halle, Germany | Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
30-Second Summary
IN SHORT: This paper addresses the core challenge of building computationally efficient and ecologically valid brain encoding models for naturalistic vision by integrating individual gaze patterns with CNN features, eliminating the need for restrictive fixation protocols.
Key Innovations
- Methodology: Proposes gaze-aware encoding models that sample CNN features based on individual eye-tracking data, reducing model parameters by 112× while maintaining predictive performance.
- Methodology: Introduces a hyperlayer feature map approach that combines features from multiple CNN layers into a unified representation with fixed spatial dimensions (7×16).
- Biology: Demonstrates that gaze-aware models are particularly beneficial for participants with more dynamic eye-movement patterns, highlighting individual differences in visual processing.
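The two methodological ideas above can be sketched in a few lines: resize each CNN layer's feature map to a common spatial grid and stack them into a "hyperlayer", then, for each fixation, keep only the feature vector at the gazed-at grid cell. This is a minimal illustration, not the authors' implementation; the function names, the nearest-neighbour resizing, and the single-cell sampling are assumptions for clarity.

```python
import numpy as np

def to_hyperlayer(feature_maps, out_h=7, out_w=16):
    """Resize each layer's feature map to a fixed (out_h, out_w) grid
    via nearest-neighbour index sampling, then stack along channels.
    feature_maps: list of arrays shaped (C_i, H_i, W_i).
    (Illustrative only; the paper's resizing scheme may differ.)"""
    resized = []
    for fm in feature_maps:
        c, h, w = fm.shape
        rows = np.arange(out_h) * h // out_h   # which source rows to keep
        cols = np.arange(out_w) * w // out_w   # which source columns to keep
        resized.append(fm[:, rows][:, :, cols])
    return np.concatenate(resized, axis=0)     # shape (sum of C_i, out_h, out_w)

def sample_at_gaze(hyperlayer, gaze_xy, frame_wh):
    """Select the hyperlayer cell under the current fixation.
    gaze_xy: (x, y) in frame pixel coordinates; frame_wh: (width, height)."""
    n_channels, grid_h, grid_w = hyperlayer.shape
    x, y = gaze_xy
    frame_w, frame_h = frame_wh
    col = min(int(x / frame_w * grid_w), grid_w - 1)
    row = min(int(y / frame_h * grid_h), grid_h - 1)
    return hyperlayer[:, row, col]             # per-fixation feature vector
```

Sampling one cell per fixation is what makes the per-TR feature count small: the model sees only the locally relevant slice of the feature map rather than the full spatial grid.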
Main Conclusions
- Gaze-aware encoding models achieved comparable performance to conventional models while using only 1,472 features per TR (112× parameter reduction, p<0.05 after FDR correction).
- Models reduced working memory requirements from 15.6 GB to 419 MB (37× reduction), making them feasible on standard laptops rather than requiring HPC resources.
- Performance improvements were most pronounced in participants with dynamic eye-movement patterns, with significant correlations in visual areas V1-V3, lateral occipital, fusiform gyri, and superior temporal sulcus.
Abstract: Representations learned by convolutional neural networks (CNNs) closely resemble information processing patterns observed in the primate visual system, as demonstrated on large neuroimaging datasets collected under diverse, naturalistic visual stimulation in which participants were instructed to maintain central fixation. This viewing condition, however, diverges significantly from ecologically valid visual behaviour, suppresses activity in visually active regions, and imposes substantial cognitive load on participants. We present a modification of the encoding model framework, adapting it for use with naturalistic vision datasets acquired under fully natural viewing conditions, without fixation, by incorporating eye-tracking data. Our gaze-aware encoding models were trained on the StudyForrest dataset, which features task-free naturalistic movie viewing. By combining eye-tracking data with the visual content of movie frames, we generate combined subject-wise, gaze-stimulus-specific feature time series. These time series are constructed by sampling only the locally and temporally relevant elements of the CNN feature map for each fixation. Our results demonstrate that gaze-aware encoding models match the performance of conventional encoding models with 112× fewer model parameters. Gaze-aware encoding models were especially beneficial for participants with more dynamic eye-movement patterns. Therefore, this approach opens the door to more ecologically valid models that can be built in more naturalistic settings, such as playing games or navigating virtual environments.