pith. machine review for the scientific record.

q-bio.NC

Neurons and Cognition

Synapse, cortex, neuronal dynamics, neural network, sensorimotor control, behavior, attention

q-bio.NC 2026-05-14 Recognition

Mamba forecaster decodes mouse choice from spikes at 76 percent

Implicit Behavioral Decoding from Next-Step Spike Forecasts at Population Scale

Next-step predictions improve accuracy by 4-6 points over raw spike counts across 39 sessions and nearly 2,000 trials.

Figure from the paper
Abstract
Closed-loop brain-computer interfaces often require both a forecast of upcoming neural population activity and a readout of the animal's behavioral state. A single Mamba forecaster, trained only on next-step spike counts at Neuropixels scale, can deliver both in one forward pass. A lightweight per-session linear head reading the model's predicted rates decodes behavior better than the same linear classifier reading the raw spike counts, under matched temporal context. We test on the Steinmetz visual-discrimination benchmark, which spans 39 sessions, roughly 27,000 neurons, and 1,994 held-out trials. Across three training seeds, Mamba's predicted rates decode mouse choice at 75.7$\pm$0.2% trial vote, roughly 2.3 times chance level, and stimulus side at 66.1$\pm$0.6%, about twice chance. Compared to a matched 500 ms-context linear decoder on the raw spike counts, Mamba wins at trial vote by 4-6 pp on response and 4-6 pp on stimulus side. A session-start calibration block of about 100-150 trials brings the readout within 1-2 pp of asymptote, and the full pipeline fits inside the 50 ms bin budget on workstation-class GPUs typical of tethered chronic Neuropixels recordings.
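The decoding scheme described here — a lightweight linear head on per-bin population activity, with a majority vote across bins within each trial — can be illustrated on synthetic data. The sketch below is not the paper's pipeline (no Mamba forecaster; all sizes, rates, and tuning are invented) but shows the trial-vote mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins, n_neurons = 200, 10, 50

# Synthetic session: the animal's choice (+1/-1) shifts the rates of half the neurons.
choice = rng.integers(0, 2, n_trials) * 2 - 1
tuning = np.concatenate([np.ones(n_neurons // 2), np.zeros(n_neurons // 2)])
rates = 5.0 + 0.8 * choice[:, None, None] * tuning[None, None, :]
counts = rng.poisson(np.clip(rates, 0.1, None), size=(n_trials, n_bins, n_neurons))

# Lightweight linear head on per-bin activity, then a majority vote across bins per trial.
train, test = slice(0, 150), slice(150, None)
X = counts[train].reshape(-1, n_neurons).astype(float)
y = np.repeat(choice[train], n_bins).astype(float)
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

scores = counts[test].reshape(-1, n_neurons) @ w[:-1] + w[-1]
bin_votes = np.sign(scores).reshape(-1, n_bins)
trial_vote = np.sign(bin_votes.sum(axis=1))
accuracy = (trial_vote == choice[test]).mean()
```

In the paper the same kind of head reads the forecaster's predicted rates rather than raw counts; the voting step is unchanged.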
q-bio.NC 2026-05-14 2 theorems

Spike forecasting ranks brain regions consistently across models

SpikeProphecy: A Large-Scale Benchmark for Autoregressive Neural Population Forecasting

Metric split into timing, spatial patterns and scale shows regional differences survive firing-rate and variance corrections in 105 sessions

Figure from the paper
Abstract
Neural population models, which predict the joint firing of many simultaneously recorded neurons forward in time, are typically evaluated by a single aggregate Pearson correlation $r$ between predicted and actual spike counts, a number that masks critical structure. We argue that how we evaluate spike forecasting matters as much as what we build, and introduce SpikeProphecy, the first large-scale benchmark for causal, autoregressive spike-count forecasting on real electrophysiology recordings. Our core contribution is a population metric decomposition that separates aggregate performance into temporal fidelity, spatial pattern accuracy, and magnitude-invariant alignment. The decomposition surfaces aspects of the underlying data that an aggregate scalar collapses together. We apply the protocol to 105 Neuropixels sessions (Steinmetz 2019 + IBL Repeated Site; ~89,800 neurons) with seven architecture baselines spanning four structural families: four SSMs (three diagonal and one non-diagonal), a Transformer, an LSTM, and a spiking network. The decomposition surfaces a brain-region predictability ranking that reproduces across all seven baselines and survives ANCOVA correction for firing-statistics constraints (region $\Delta R^2 = 0.018$ above the firing-statistics covariates). It also exposes a sub-Poisson evaluation floor where rigorous metrics combine with genuine biophysical constraints on regular spike trains, and yields a negative result on KL-on-output-rates distillation for ANN-to-SNN transfer in this Poisson count domain.
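A toy version of the idea — splitting an aggregate fit into temporal, spatial, and magnitude-invariant views — can be written in a few lines. This is a minimal sketch under invented definitions, not the benchmark's actual decomposition; it shows how a per-neuron gain distortion leaves temporal fidelity intact while degrading population-pattern accuracy, which a single aggregate $r$ would blur together.

```python
import numpy as np

def decompose(pred, true):
    """Split an aggregate fit into temporal, spatial, and magnitude-invariant views.
    pred, true: (time, neurons) arrays of predicted / observed spike counts."""
    def corr(a, b, axis):
        a = a - a.mean(axis=axis, keepdims=True)
        b = b - b.mean(axis=axis, keepdims=True)
        den = np.sqrt((a ** 2).sum(axis) * (b ** 2).sum(axis)) + 1e-12
        return (a * b).sum(axis) / den
    temporal = corr(pred, true, axis=0).mean()   # per-neuron fidelity over time
    spatial = corr(pred, true, axis=1).mean()    # per-bin fidelity across neurons
    scale_free = (pred.ravel() @ true.ravel()) / (
        np.linalg.norm(pred) * np.linalg.norm(true) + 1e-12)
    return temporal, spatial, scale_free

rng = np.random.default_rng(1)
true = rng.poisson(4.0, size=(200, 30)).astype(float)
gains = rng.uniform(0.2, 3.0, 30)
pred = gains * true   # every time course is perfect, but population patterns are distorted

temporal, spatial, scale_free = decompose(pred, true)
```

Here `temporal` stays near 1 while `spatial` drops well below it, so the decomposition localizes the error to the population-pattern axis.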
q-bio.NC 2026-05-13 Recognition

Language descriptions capture monkey neuron selectivity

Letting the neural code speak: Automated characterization of monkey visual neurons through human language

Digital twins convert V1 and V4 responses into semantic hypotheses that drive extreme activity in most cells

Figure from the paper
Abstract
Understanding what individual neurons encode is a core question in neuroscience. In primary visual cortex (V1), mathematical models (e.g., Gabor functions) capture neural selectivity, but no comparable framework exists for higher areas. We show that natural language can fill this role: across macaque V1 and V4, the selectivity of most neurons is captured by concise, verifiable semantic descriptions. Using digital twins of V1 and V4, we develop a closed-loop framework that translates each neuron's high- and low-activating images into dense captions, generates a semantic hypothesis and synthesized images, and verifies the hypothesis in silico. Descriptions range from oriented edges and spatial frequency in V1 to conjunctions of form, color, and texture in V4. In V4, images generated from activating and suppressing hypotheses drove 96.1% of neurons above the 95th and 97.6% below the 5th percentile of natural-image responses, respectively (vs. ~10\% for random images); V1 activation results matched V4, while V1 suppression was less describable in language. Representational similarity analysis reveals partial alignment between neural activity, vision embeddings, and language embeddings, with vision most aligned to neural activity; alignment lost in the text bottleneck is recovered when hypotheses are rendered back into images, showing that linguistic compression is lossy yet semantically faithful. Together, these results show that combining generative models with neural digital twins enables interpretable, testable descriptions of neural function at scale, toward agentic scientific discovery.
q-bio.NC 2026-05-13 1 theorem

Conductance synapses and correlations yield realistic variability

Empirical scaling laws in balanced networks with conductance-based synapses

Alone each produces unrealistic extremes in balanced recurrent networks, but together they cancel to moderate levels matching cortex.

Figure from the paper
Abstract
Strongly coupled, recurrent, balanced network models have been successful in describing and predicting many phenomena observed in cortical neural recordings. However, most balanced network models use current-based synapse models in place of more realistic, conductance-based models. Conductance-based synapse models predict unrealistically small membrane potential variability. On the other hand, introducing realistic levels of spike time correlations to models with current-based synapses predicts unrealistically large membrane potential variability. We use computer simulations to show that these two effects can cancel: Recurrent network models with conductance-based synapses and spike time correlations produce more realistic, moderate levels of membrane potential variability. Consistent with recent work on feedforward networks, our results show that including more realistic modeling assumptions produces more realistic dynamics, but only when the two modeling assumptions are included together.
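The contrast between the two synapse models can be sketched in a single-neuron simulation. The parameters below are invented round numbers, not the paper's, and the drive is uncorrelated; the point is only the mechanism: with conductance-based synapses the driving force shrinks and the total conductance grows with input, shunting voltage fluctuations relative to the current-based case.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, tau = 0.1, 10000, 20.0            # ms step, number of steps, membrane time constant
EL, Ee, Ei = -70.0, 0.0, -80.0           # leak and synaptic reversal potentials (mV)

def filtered_poisson(rate, tau_s, amp):
    """Exponentially filtered Poisson train: a crude synaptic conductance trace."""
    g = np.zeros(T)
    spikes = rng.random(T) < rate * dt
    for t in range(1, T):
        g[t] = g[t - 1] - dt * g[t - 1] / tau_s + amp * spikes[t]
    return g

ge = filtered_poisson(2.0, 5.0, 0.05)    # conductances in units of the leak conductance
gi = filtered_poisson(2.0, 5.0, 0.10)

V_cond, V_curr = np.full(T, EL), np.full(T, EL)
for t in range(1, T):
    # Conductance-based: driving force shrinks and total conductance grows with input.
    V = V_cond[t - 1]
    V_cond[t] = V + dt / tau * (-(V - EL) - ge[t] * (V - Ee) - gi[t] * (V - Ei))
    # Current-based: the same traces injected as currents at a frozen driving force.
    V = V_curr[t - 1]
    V_curr[t] = V + dt / tau * (-(V - EL) - ge[t] * (EL - Ee) - gi[t] * (EL - Ei))

var_cond, var_curr = V_cond[T // 2:].var(), V_curr[T // 2:].var()
```

The paper's claim is that adding spike-time correlations to the input inflates the current-based variance further, while the conductance-based shunting pulls it back toward realistic levels.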
q-bio.NC 2026-05-13 2 theorems

Contrastive video training yields MT-like direction maps

Self-organized MT Direction Maps Emerge from Spatiotemporal Contrastive Optimization

A 3D ResNet develops pinwheels and selectivity matching macaque MT through the trade-off between contrastive and spatial objectives.

Figure from the paper
Abstract
The spatial and functional organization of the primate visual cortex is a fundamental problem in neuroscience. While recent computational frameworks like the Topographic Deep Artificial Neural Network (TDANN) have successfully modeled spatial organization in the ventral stream, the computational origins of the dorsal stream's distinct topographies, such as direction-selective maps in the middle temporal (MT) area, remain largely unresolved. In this work, we present a spatiotemporal TDANN to investigate whether MT topography is governed by the same universal principles. By training a 3D ResNet on naturalistic videos via a Momentum Contrast (MoCo) self-supervised paradigm alongside a biologically inspired spatial loss, we demonstrate the spontaneous emergence of brain-like direction maps and topological pinwheel structures. Crucially, we reveal that MT tuning properties, characterized by strong direction selectivity paired with a residual axial component, arise from a strict optimization trade-off between task-driven discriminative pressure and spatial regularization. The model's representations quantitatively match in vivo macaque MT physiological baselines, including direction selectivity index, circular variance, and pinwheel density. These findings unify the computational origins of the ventral and dorsal streams, establishing a general mechanism for cortical self-organization.
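Two of the physiological baselines named here — direction selectivity index and circular variance — have standard, easily computed forms. The sketch below uses invented tuning curves (not model or macaque data) to show both metrics on a sharply tuned versus an untuned unit.

```python
import numpy as np

def direction_metrics(rates, dirs_deg):
    """Direction selectivity index (DSI) and circular variance (CV) of a tuning curve."""
    dirs = np.deg2rad(dirs_deg)
    pref = int(np.argmax(rates))
    null = (pref + len(rates) // 2) % len(rates)          # direction opposite the preferred
    dsi = (rates[pref] - rates[null]) / (rates[pref] + rates[null] + 1e-12)
    cv = 1 - np.abs(np.sum(rates * np.exp(1j * dirs))) / (np.sum(rates) + 1e-12)
    return dsi, cv

dirs = np.arange(0, 360, 45)                              # 8 motion directions
sharp = np.exp(2.0 * np.cos(np.deg2rad(dirs - 90.0)))     # von Mises-like, prefers 90 deg
flat = np.ones(len(dirs))
dsi_sharp, cv_sharp = direction_metrics(sharp, dirs)
dsi_flat, cv_flat = direction_metrics(flat, dirs)
```

A sharply tuned unit scores high DSI and low CV; an untuned unit scores DSI ≈ 0 and CV ≈ 1, the two ends of the range the paper compares against MT physiology.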
q-bio.NC 2026-05-12 Recognition

Cerebellar module speeds RNN learning on temporal tasks

Cortico-cerebellar modularity as an architectural inductive bias for efficient temporal learning

The feedforward cerebellar component drives faster training and higher performance even when the recurrent cortical core is frozen after minimal training.

Figure from the paper
Abstract
The cerebellum and cerebral cortex form tightly coupled circuits thought to support flexible and efficient temporal processing. How this interaction shapes cortical learning dynamics, and whether such heterogeneous modularity can benefit artificial systems, remains unclear. Here, we augment a recurrent neural network (RNN) with a cerebellar-inspired feedforward module and evaluate the resulting architecture on temporal tasks of varying difficulty. The cortico-cerebellar RNN (CB-RNN) learns faster and reaches higher maximum performance than parameter-matched fully recurrent baselines across a variety of regimes. Crucially, freezing the recurrent core after minimal training and delegating subsequent learning to the cerebellar module preserves superior learning efficiency, suggesting the cerebellar module is a primary driver of efficiency and that the cortical network can largely function as a fixed reservoir. Our results suggest that heterogeneous modular architectures can act as a powerful structural inductive bias in neural systems.
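The "fixed reservoir" reading of this result connects to echo-state networks: a frozen random recurrent core plus a trained feedforward readout can already solve short temporal tasks. The sketch below is that reservoir-computing analogue (all sizes and the delay task are invented), not the paper's CB-RNN or its training procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rec, T, delay = 100, 2000, 5

# Fixed recurrent "cortical" core: its weights are never trained (a reservoir).
W_rec = rng.normal(0, 1.0, (n_rec, n_rec))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))   # spectral radius 0.9
w_in = rng.normal(0, 1.0, n_rec)

u = rng.uniform(-1, 1, T)                                 # scalar input stream
h = np.zeros((T, n_rec))
for t in range(1, T):
    h[t] = np.tanh(W_rec @ h[t - 1] + w_in * u[t])

# Trainable feedforward "cerebellar" module: ridge regression on the frozen core's
# states, solving a temporal memory task (recall the input from `delay` steps ago).
target = np.roll(u, delay)
H, ytgt = h[delay:], target[delay:]
w_cb = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_rec), H.T @ ytgt)
r = np.corrcoef(H @ w_cb, ytgt)[0, 1]
```

Only the feedforward weights `w_cb` are learned; the recurrent core supplies temporal context for free, which is the structural division of labor the abstract attributes to the cortico-cerebellar circuit.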
q-bio.NC 2026-05-12 2 theorems

Sparse coding and timing preserve knowledge across context switches

Joint sparse coding and temporal dynamics support context reconfiguration

Brain and model networks use these traits to reconfigure without erasing prior experience.

Figure from the paper
Abstract
Adaptive behavior requires the brain to transition between distinct contexts while maintaining representations of prior experience. The ability to reconfigure neural representations without erasing previously acquired knowledge is central to learning in dynamic environments, yet the neural mechanisms that support this balance remain unclear. Understanding these mechanisms is also critical for addressing catastrophic forgetting in artificial systems designed for lifelong learning. Here, we identify joint sparse coding and temporal dynamics in both the mouse medial prefrontal cortex (mPFC) and computational networks as mechanisms that help preserve prior representations during context transitions. Specifically, sparsity in context-dependent representations reduces cross-context interference, whereas temporal dynamics within the network activity further enhance context separability across time. Strikingly, networks endowed with both properties, such as spiking neural networks, exhibit improved retention during lifelong learning without auxiliary heuristics. These findings establish joint sparse coding and temporal dynamics as a core mechanism supporting flexible context reconfiguration in lifelong learning and, through their activity constraining nature, as an energy-efficient architectural principle for stable adaptation. Together, they provide a mechanistic framework for understanding how the brain preserves prior knowledge while flexibly adapting to new contexts.
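The claim that sparsity reduces cross-context interference has a simple combinatorial core: two random codes collide on a fraction of units roughly equal to the activity level. The sketch below (random binary codes, invented sizes — not the mPFC data or the paper's networks) makes that concrete.

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, n_patterns = 1000, 50

def context_code(frac_active):
    """Random binary population codes for one context at a given activity level."""
    return (rng.random((n_patterns, n_units)) < frac_active).astype(float)

def interference(a, b):
    """Mean fraction of context-A active units that context B also recruits."""
    overlap = (a[:, None, :] * b[None, :, :]).sum(-1)
    return (overlap / (a.sum(-1)[:, None] + 1e-12)).mean()

i_dense = interference(context_code(0.5), context_code(0.5))
i_sparse = interference(context_code(0.05), context_code(0.05))
```

Dense codes overlap on about half their active units, sparse codes on about 5%, so a new context overwrites far less of the old representation — the static half of the mechanism; the temporal-dynamics half further separates contexts in time.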
q-bio.NC 2026-05-11 Recognition

Prediction signals match group brain patterns in language learning

Predictive and feedback signals differently shape the formation of group-level and individualized language representations

Feedback signals instead track which individuals will generalize the new language best after a week of training.

Figure from the paper
Abstract
Adults vary greatly in how effectively they learn a new language, but the signals driving the learning processes and individual differences remain unclear. Over seven days, we tracked behavioral learning and collected fMRI data from 102 adults as they learned an artificial language with corrective feedback. We trained matched transformer models with prediction, feedback, or combined objectives and compared their internal representations to brain activity. Representations derived from the prediction-focused model accounted for the largest share of unique neural variance at the group level, despite the human task being feedback-based. Throughout model training, both objectives showed a shift in brain-model alignment from sensory to higher-order language and associative networks, indicating abstraction processing. Conversely, neural patterns related to the feedback model were most useful for predicting individual generalization outcomes on Day 7. These findings support a multi-signal model of adult language learning, in which prediction shapes a common neural learning architecture across learners, whereas feedback-related mechanisms better explain individual differences over time.
q-bio.NC 2026-05-11 Recognition

Internal errors trigger sparse neural net updates

Internally triggered retrospective learning in neural networks

Discrepancy thresholds replace continuous weight changes, focusing adaptation on informative sequential patterns.

Abstract
Learning in artificial neural networks usually relies on continuous, externally driven weight updates, in which parameters are modified at every step in response to incoming data, error signals or reward feedback. In this setting, routine and informative inputs contribute similarly to parameter adjustment. We introduce a learning approach in which parameter updates are governed by internally generated events arising from the network's own representational dynamics. During ongoing activity, synaptic interactions are accumulated as latent traces encoding recent coactivation patterns, without immediately modifying the underlying parameters. In parallel, an internal predictive process estimates the evolving latent state, while a scalar measure of discrepancy between predicted and observed states is continuously computed. When discrepancy exceeds an adaptive threshold derived from recent error statistics, a learning event is triggered, inducing a retrospective update selectively integrating past activity into the current configuration. We performed simulations using a minimal neural network exposed to structured sequential inputs with transient perturbations. We found that learning occurs through sparse, temporally localized events associated with increases in prediction error, leading to stepwise changes in synaptic efficacy and discrete transitions in latent state organization. By selectively reorganizing parameters in response to internally detected discrepancies, our episodic updating may reduce unnecessary parameter drift while preserving informative patterns. Potential applications include systems requiring selective adaptation to rare or informative inputs such as physiological, industrial or environmental monitoring, edge computing under limited energy budgets, autonomous systems operating in dynamic conditions and sequential computational data processing.
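The trigger mechanism — accumulate a latent trace, predict the observed state, and fire a retrospective update only when the discrepancy exceeds an adaptive threshold — can be reduced to a scalar toy. Everything below (the OU-style trace, the 50-step error window, the 3-sigma rule, the step perturbation) is an invented minimal instance, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
T, lr, k = 500, 0.5, 3.0

w = 0.0          # the adapted parameter (a scalar weight for clarity)
trace = 0.0      # latent trace: recent evidence accumulated without touching w
pred = 0.0       # internal prediction of the observed state
errors, events = [], []

x = 0.05 * rng.normal(size=T)
x[200:] += 1.0   # a transient environmental shift the system should absorb

for t in range(T):
    trace = 0.9 * trace + 0.1 * (x[t] - w)            # accumulate, do not yet apply
    err = abs(x[t] - pred)
    errors.append(err)
    recent = np.array(errors[-50:])
    threshold = recent.mean() + k * recent.std()       # adaptive threshold from error stats
    if t > 50 and err > threshold:
        w += lr * trace                                # retrospective, event-triggered update
        trace = 0.0
        events.append(t)
    pred = 0.9 * pred + 0.1 * x[t]                     # slow internal predictive process
```

Updates are sparse and cluster around the perturbation at step 200, while routine inputs leave the parameter untouched — the qualitative behavior the abstract describes.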
q-bio.NC 2026-05-11 Recognition

Multi-timescale inhibition widens theta oscillator locking range

Dynamical mechanisms of flexible phase-locking in cortical theta oscillators

Slow and superslow currents create recovery delays that let oscillators entrain to inputs slower than their natural frequency.

Figure from the paper
Abstract
Oscillatory activity in auditory cortex is thought to play a central role in auditory and speech processing by synchronizing neural rhythms to external acoustic features of the speech stream. To support this function, cortical oscillators must flexibly phase-lock to inputs spanning a wide range of timescales, including rhythms substantially slower than their intrinsic frequency. Here we identify a general dynamical mechanism by which intrinsic inhibitory currents operating on multiple timescales enable such flexible phase-locking. Using tools from dynamical systems theory, we show that interactions between slow and superslow inhibitory processes generate prolonged post-input recovery delays through delayed Hopf phenomena, thereby substantially expanding the frequency range over which entrainment can occur. We demonstrate this mechanism in a biophysically grounded cortical theta oscillator model for speech segmentation. Specifically, we show that both a theta-timescale (4-8 Hz) inhibitory current $I_m$ and a slower delta-timescale (1-4 Hz) inhibitory potassium current $I_{\rm K_{SS}}$ are crucial for entrainment flexibility. Their interaction creates a three-timescale structure that gives rise to pronounced delay phenomena associated with a delayed Hopf bifurcation (DHB). Interestingly, the superslow $I_{\rm K_{SS}}$ and the associated DHB play little role in the unforced oscillatory dynamics, but are recruited to support phase locking under external forcing. Moreover, the intermediate-timescale current $I_m$, rather than being redundant, further expands the phase-locking range by prolonging delayed recovery along the superslow manifold. Together, these results suggest that coordination among intrinsic inhibitory currents operating on multiple timescales may represent a key mechanism supporting flexible phase locking to rhythmic inputs in the brain.
q-bio.NC 2026-05-08 2 theorems

Disentangled multi-atlas model aligns brain disorder representations

Learning Cross-Atlas Consistent Brain Disorder Representations via Disentangled Multi-Atlas Functional Connectivity Learning

MADCLE enforces cross-atlas consistency on disease signals from fMRI while isolating covariates and atlas effects.

Figure from the paper
Abstract
Functional connectivity (FC) derived from resting-state fMRI is widely used to characterize large-scale brain network alterations in neurological and psychiatric disorders. However, FC construction critically depends on the choice of brain atlas, and different parcellations may emphasize distinct organizational features, leading to heterogeneous and sometimes inconsistent representations. Existing multi-atlas approaches partially alleviate this issue but often fuse atlas-derived features or predictions at a relatively shallow level, while single-atlas disentanglement methods do not explicitly address cross-atlas heterogeneity. We propose Multi-Atlas Disentangled Connectivity LEarning (MADCLE), a multi-branch representation learning framework that jointly encodes FC matrices derived from different brain atlases. Rather than introducing a single explicitly shared latent variable across parcellations, MADCLE learns atlas-wise disease-related representations and encourages them to be cross-atlas consistent through distributional alignment. Meanwhile, covariate-related and atlas-dependent residual factors are modeled separately using covariate similarity supervision, atlas-specific reconstruction, and decorrelation constraints, thereby reducing the leakage of non-disease and parcellation-dependent information into the disease-related embeddings. Experiments on the ADNI and ADHD-200 datasets suggest that MADCLE achieves competitive or improved performance compared with single-atlas baselines, multi-atlas GNN/Transformer models, and recent multi-atlas consistency frameworks. These results support the potential value of structured disentanglement for FC-based disorder identification under heterogeneous parcellation schemes.
q-bio.NC 2026-05-08

Brains and DNNs align on preserved stimulus transformations

Beyond Object-Level Alignment: Do Brains and DNNs Preserve the Same Transformations?

Semantic axes match higher visual cortex to deeper layers; basic visual axes match earlier cortex to shallower layers.

Figure from the paper
Abstract
Brain-DNN alignment is usually assessed through stimulus-level correspondence or stimulus-set geometry. Inspired by category theory, we operationalize a different question: do brain and model preserve the same candidate transformations among stimuli? We formalize this as approximate naturality: if a proxy-defined stimulus change is propagated through the brain side and then translated to the model side, the result should match translating first and then propagating, so that the naturality square approximately commutes. We quantify deviations from commutativity by a Naturality Violation Score (NVS) normalized to a permutation null, shifting alignment from per-stimulus sameness to preservation of structure under an explicitly chosen comparison map. As a proof of concept, a controlled five-factor synthetic setting shows that NVS separates complementary alignment failures that aggregate object- and geometry-level scalars cannot resolve. Applied to fMRI responses from the GOD dataset (5 subjects), 3 vision DNNs, and 3 World-Model proxy embeddings, the axis-resolved analysis reveals a hierarchy crossover: semantic axes align most strongly toward HVC and deeper DNN layers (NVS^animacy = 0.39 vs 0.52 for the next-best axis and 1.0 for the permutation-null baseline), whereas low- and mid-level visual axes align toward earlier visual cortex and shallower layers. Supporting analyses (a 15-axis appendix atlas, dissociation tests against RSA/CKA and encoding/decoding accuracy, and a W-less anchor-ablation control) confirm that the alignment is selective over candidate morphism families rather than uniform. NVS thereby turns brain-DNN comparison into a test of jointly preserved candidate transformations, relative to an explicit proxy space and permutation null, and opens a path to richer proxy spaces and controlled world-side transformations.
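The naturality square has a compact linear-algebra caricature: propagate a stimulus change on the brain side and then translate to model space, or translate first and then propagate on the model side, and compare. The sketch below uses random linear maps and an invented normalization — it is an illustration of the commuting-square logic, not the paper's NVS estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 10
P_brain = rng.normal(size=(d, d)) / np.sqrt(d)   # propagates a stimulus change, brain side
T_map = rng.normal(size=(d, d)) / np.sqrt(d)     # comparison map from brain to model space

# If the model applies the conjugated map T P T^-1, the naturality square commutes exactly.
P_model_good = T_map @ P_brain @ np.linalg.inv(T_map)
P_model_bad = rng.normal(size=(d, d)) / np.sqrt(d)   # unrelated model-side propagation

def nvs(P_b, P_m, T, n=200, n_perm=100):
    dx = rng.normal(size=(n, d))                     # sampled stimulus changes
    lhs = dx @ (T @ P_b).T                           # propagate on the brain side, then translate
    rhs = dx @ (P_m @ T).T                           # translate first, then propagate
    viol = np.linalg.norm(lhs - rhs, axis=1).mean()
    null = np.mean([np.linalg.norm(lhs[rng.permutation(n)] - rhs, axis=1).mean()
                    for _ in range(n_perm)])         # permutation null breaks the pairing
    return viol / null

score_good = nvs(P_brain, P_model_good, T_map)
score_bad = nvs(P_brain, P_model_bad, T_map)
```

A commuting pair scores near 0, an unrelated pair near the permutation null of 1 — the same scale on which the paper reports, e.g., 0.39 for the animacy axis.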
q-bio.NC 2026-05-08

Neural geometry stretches informative directions and shrinks the rest

A multi-scale information geometry reveals the structure of mutual information in neural populations

First principles of coarse-graining produce a metric exactly related to mutual information, expanding well-encoded stimulus features.

Figure from the paper
Abstract
Understanding how neural population responses represent sensory information is a central problem in systems neuroscience. One approach is to define a representational geometry on stimulus space in which distances reflect how reliably stimuli can be distinguished from neural activity. However, different constructions of these distances can lead to qualitatively different conclusions about the neural code. Here, we show that a unique Riemannian representational geometry emerges from first principles governing how distances contract as stimulus resolution is lost through coarse-graining. This results in a multi-scale extension of the Fisher information metric, capturing encoding structure from fine stimulus details to coarse global distinctions. The resulting geometry is exactly related to the mutual information encoded by the population: well encoded stimulus directions - those contributing more to mutual information - are expanded, whereas poorly encoded directions are contracted. The metric tensor can be estimated using diffusion models, making the framework practical for large neural populations and high-dimensional stimuli. Applied to visual cortical responses to natural images, the eigenvectors of the metric tensor identify stimulus variations that contribute most to information transmission, yielding interpretable features that are robust to modelling choices. Together, these results provide a principled, information-theoretic framework for characterising neural population codes.
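The single-scale starting point of this framework, the Fisher information metric, is easy to compute in closed form for independent Poisson neurons: $g_{ij}(s) = \sum_n \partial_i f_n \, \partial_j f_n / f_n(s)$. The sketch below (log-linear tuning, invented weights; the paper's multi-scale extension and diffusion-model estimator are not attempted here) shows how the metric's top eigenvector picks out the best-encoded stimulus direction.

```python
import numpy as np

rng = np.random.default_rng(7)
n_neurons = 40

# Log-linear Poisson tuning: strong dependence on stimulus dim 0, weak on dim 1.
W = np.c_[rng.normal(0, 1.0, n_neurons), rng.normal(0, 0.1, n_neurons)]

def fisher_metric(s):
    """g_ij(s) = sum_n (df_n/ds_i)(df_n/ds_j) / f_n(s) for independent Poisson neurons."""
    f = np.exp(W @ s)          # firing rates f_n(s)
    J = f[:, None] * W         # Jacobian df_n/ds_j
    return (J.T / f) @ J

g = fisher_metric(np.zeros(2))
evals, evecs = np.linalg.eigh(g)   # ascending eigenvalues
top = evecs[:, -1]                 # best-encoded (most "expanded") stimulus direction
```

Distances along `top` are expanded and distances along the weakly encoded dimension contracted, which is the geometric picture the abstract ties to mutual information.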
q-bio.NC 2026-05-08

Decoding alignment can arise from small neuron subsets

Decoding Alignment without Encoding Alignment: A critique of similarity analysis in neuroscience

Classic metrics like RSA stay unchanged even when encoding organization is altered, showing they miss differences in neural computation.

Figure from the paper
Abstract
Decoding approaches are widely used in neuroscience and machine learning to compare stimulus representations across neural systems, such as different brain regions, organisms, and deep learning models. Popular methods include decoding (perceptual) manifolds and alignment metrics such as Representational Similarity Analysis (RSA) and Dynamic Similarity Analysis (DSA), where similarity in decoding representations is interpreted as evidence for similar computation. This paper demonstrates a fundamental weakness behind this approach: it is misleading to assume that representational geometry is representative of a neuronal population as a whole, when such representations may actually be shaped by a very small subset of neurons. We show that the complementary encoding paradigm addresses this issue directly: it characterizes how neurons are organized globally in terms of their responses to a set of data, providing insight into how the decoding representation is implemented by neurons within a population. We demonstrate across experiments in biological systems and deep learning models that (i) surprisingly, similar decoding behavior and high representational alignment can arise from small, non-representative subpopulations of neurons; and critically, (ii) alignment metrics are insensitive to encoding manifold topology (how function is distributed across neurons), despite this being a key signature of differentiation across biological systems. A controlled MNIST experiment provides causal evidence: decoding metrics remain unchanged even when encoding topology is causally manipulated via the training loss. Overall, similarity in decoding behavior, as measured by classic alignment metrics, does not imply similarity in function or computation, motivating the use of encoding manifolds as a complementary tool for comparing neural systems.
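The core claim — that representational geometry can be carried by a small subpopulation — is easy to reproduce on synthetic responses. Below, 5% of the units carry all the stimulus selectivity (an invented construction, not the paper's biological or MNIST experiments), and the full-population RDM nearly matches the RDM of that small subset.

```python
import numpy as np

rng = np.random.default_rng(8)
n_stim, n_neurons, n_sel = 30, 500, 25

stim = rng.normal(size=(n_stim, 3))                    # latent stimulus features
W_sel = rng.normal(0, 3.0, (3, n_sel))                 # a small, strongly selective subpopulation
resp = np.concatenate([stim @ W_sel,
                       rng.normal(size=(n_stim, n_neurons - n_sel))], axis=1)

def rdm(x):
    """Vectorized upper triangle of the pairwise-distance matrix."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), 1)]

rsa_subset = np.corrcoef(rdm(resp), rdm(resp[:, :n_sel]))[0, 1]   # 5% of neurons
rsa_rest = np.corrcoef(rdm(resp), rdm(resp[:, n_sel:]))[0, 1]     # remaining 95%
```

An RSA-style comparison would call the 5% subset highly "aligned" with the whole population even though 95% of the neurons contribute nothing — the non-representativeness the paper warns about.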
q-bio.NC 2026-05-07

Think-aloud data reveals different decision models than behavior alone

Think-Aloud Reshapes Automated Cognitive Model Discovery Beyond Behavior

Verbal reports of internal processes improve prediction and shift models from explicit comparison to integrated utility for most people.

Figure from the paper
Abstract
Computational cognitive models discovered using large language models have so far relied solely on behavioral data. However, it is well-known that models produced from the behavioral trajectory alone are typically under-determined. In this work, we explore the use of Think Aloud traces as an additional form of data constraint during automated model discovery. When applied to the domain of risky decision-making, we find that the models discovered with think-aloud achieve significantly improved predictive performance on held-out data. Additionally, we find that the discovered models belong to different structural classes than those discovered from behavior alone for the majority of participants (69.4\%), specifically, it shifts from Explicit comparator towards Integrated utility. These results suggest that process-level language data not only improve model fit, but also systematically reshape the structure of the discovered cognitive models, enabling the identification of mechanisms that are not recoverable from behavior alone.
q-bio.NC 2026-05-07

Antisymmetric indices detect genuine high-order brain frequency couplings

A Generalized Framework of Antisymmetric Polyspectral Indices for Identifying High-Order Neural Interactions

The measures quantify multi-frequency harmonic dependencies while their antisymmetry cancels volume conduction artifacts that confound conventional metrics.

Figure from the paper
Abstract
Cross-frequency interactions are fundamental brain mechanisms for integrating information across temporal scales. However, accurate identification of these couplings is hindered by complex multi-frequency nonlinearities and by spurious, zero-lag artifacts caused by volume conduction. To our knowledge, conventional metrics lack a robust framework to characterize genuine interactions among multiple time series where a frequency of interest $f_N$ arises from the combination of $N-1$ components such that $f_N = \sum_{i=1}^{N-1} f_i$. We introduce a general family of antisymmetric cross-polyspectral indices designed to quantify these harmonic dependencies while being intrinsically robust to instantaneous mixing. We derive the theoretical properties of these quantities and validate them through simulations of cubic nonlinearities. As a proof of concept, we apply the indices to empirical EEG recordings; the results reveal significant higher-order dependencies that elude standard analytical approaches. We further discuss how these indices can inform novel, personalized multi-site transcranial magnetic stimulation (mTMS) protocols by enabling the selective monitoring and modulation of specific multi-frequency network interactions.
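For the lowest-order case $N = 3$ (the cross-bispectrum), the mechanism can be demonstrated numerically. The sketch below uses a simplified two-term antisymmetrization (swapping the first two channels) with invented frequencies and mixing weights — not the paper's generalized index family — to show why the antisymmetric part vanishes identically under instantaneous mixing but survives for a genuine quadratic interaction.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, n_seg = 256, 200
f1, f2 = 10, 22                                  # Hz; coupling target at f1 + f2 = 32 Hz
t = np.arange(fs) / fs                           # 1-second segments: FFT bin k = k Hz

def cross_bispec(x, y, z):
    """Segment-averaged cross-bispectrum <X(f1) Y(f2) Z*(f1+f2)>."""
    X, Y, Z = (np.fft.rfft(sig, axis=1) for sig in (x, y, z))
    return np.mean(X[:, f1] * Y[:, f2] * np.conj(Z[:, f1 + f2]))

def antisym_index(x, y, z):
    """Normalized channel-swapped difference; exactly zero under instantaneous mixing."""
    Bxyz, Byxz = cross_bispec(x, y, z), cross_bispec(y, x, z)
    return np.abs(Bxyz - Byxz) / (np.abs(Bxyz) + np.abs(Byxz) + 1e-12)

p1 = rng.uniform(0, 2 * np.pi, (n_seg, 1))
p2 = rng.uniform(0, 2 * np.pi, (n_seg, 1))
c1, c2 = np.cos(2 * np.pi * f1 * t + p1), np.cos(2 * np.pi * f2 * t + p2)

# Genuine quadratic interaction: channel z carries the product of x and y.
x = c1 + 0.1 * rng.normal(size=c1.shape)
y = c2 + 0.1 * rng.normal(size=c2.shape)
z = c1 * c2 + 0.1 * rng.normal(size=c1.shape)
idx_genuine = antisym_index(x, y, z)

# Volume conduction: every channel is a scaled copy of one self-coupled source.
s = c1 + c2 + np.cos(2 * np.pi * (f1 + f2) * t + p1 + p2)
idx_mixed = antisym_index(0.9 * s, 0.5 * s, 0.7 * s)
```

Under pure mixing the two channel orderings give identical bispectra, so the antisymmetric part cancels algebraically; a true $f_1 + f_2$ coupling breaks that symmetry.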
q-bio.NC 2026-05-07

The study tests whether biasing deep neural networks toward low spatial frequencies or a mid-frequency human channel explains adversarial robustness.

Dissociating spatial frequency reliance from adversarial robustness advantages in neurally guided deep convolutional neural networks

Direct biasing to low or human mid-frequencies yields only modest or negative effects unlike brain-matched models.

Figure from the paper
Abstract
Deep convolutional neural networks (DCNNs) have rivaled humans on many visual tasks, yet they remain vulnerable to near-imperceptible perturbations generated by adversarial attacks. Recent work shows that aligning DCNN representations with human visual cortex activity improves adversarial robustness, but the mechanisms driving this advantage are unclear. One hypothesis suggests that neural alignment confers robustness by biasing models away from brittle high-frequency details and towards the low spatial frequencies (LSF). However, recent work shows that human object recognition critically depends on a narrow, mid-frequency "human channel". Interestingly, this band was partially preserved in prior LSF-focused studies. Here, we investigate whether a spectral bias towards the LSF or the human channel is the primary driver of the adversarial robustness observed in neurally aligned DCNNs. We first show that DCNNs aligned to higher-order regions of the human ventral visual stream systematically increase reliance on both LSF and the human channel. However, directly steering DCNNs towards these bands revealed a clear dissociation. Biasing models towards the human channel, either alone or together with LSF, does not improve robustness and even impairs it. LSF bias produced some robustness gains, but such improvements are modest despite inducing much larger shifts in spatial-frequency reliance than neurally aligned models. Spatial-frequency-biased models overall show little, if any, increase in similarity to human neural representational geometry. Together, our results suggest that altered spatial-frequency reliance is likely an emergent property of learning more human-like representations rather than the primary mechanism by which neural alignment confers adversarial robustness, and motivate the need for future research examining representational properties beyond spatial-frequency profiles.
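The spectral-bias manipulations discussed here amount to restricting an image to a radial frequency band before it reaches the network. The sketch below is a generic FFT band-pass on a random image (band edges invented; not the paper's stimuli or its exact "human channel" definition), showing how an LSF band, a mid band, and the remainder tile the spectrum.

```python
import numpy as np

def spectral_band(img, lo, hi):
    """Keep only components with radial spatial frequency in [lo, hi) cycles/image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    r = np.sqrt(fx ** 2 + fy ** 2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * ((r >= lo) & (r < hi)))))

rng = np.random.default_rng(10)
img = rng.normal(size=(64, 64))
lsf = spectral_band(img, 0, 8)      # low-spatial-frequency content
mid = spectral_band(img, 8, 20)     # a stand-in for a mid-frequency "human channel"
hsf = spectral_band(img, 20, 46)    # everything above (max radius is ~45.3)
recon = lsf + mid + hsf             # disjoint bands tile the spectrum exactly
```

Training or evaluating on `lsf` versus `lsf + mid` is the kind of direct steering the paper contrasts with neural alignment.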
q-bio.NC 2026-05-06

One model predicts brain responses for new stimuli and recovers known results

A foundation model of vision, audition, and language for in-silico neuroscience

It beats linear encoding models several-fold on fMRI data and matches decades of lab findings through virtual experiments on vision and language

Cognitive neuroscience is fragmented into specialized models, each tailored to specific experimental paradigms, preventing a unified model of cognition in the human brain. Here, we introduce TRIBE v2, a tri-modal (video, audio and language) foundation model capable of predicting human brain activity in a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects, we demonstrate that our model accurately predicts high-resolution brain responses for novel stimuli, tasks and subjects, surpassing traditional linear encoding models and delivering several-fold improvements in accuracy. Critically, TRIBE v2 enables in silico experimentation: tested on seminal visual and neuro-linguistic paradigms, it recovers a variety of results established by decades of empirical research. Finally, by extracting interpretable latent features, TRIBE v2 reveals the fine-grained topography of multisensory integration. These results establish artificial intelligence as a unifying framework for exploring the functional organization of the human brain.
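
For reference, the "traditional linear encoding model" baseline such foundation models are compared against is typically a ridge regression from stimulus features to voxel responses. A self-contained sketch on synthetic data (all shapes, the noise level, and the ridge penalty are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_feat, n_vox = 300, 10, 50
X = rng.standard_normal((n_t, n_feat))                # stimulus embeddings
W = rng.standard_normal((n_feat, n_vox))              # ground-truth weights
Y = X @ W + 0.5 * rng.standard_normal((n_t, n_vox))   # synthetic fMRI responses

# Ridge encoding model: closed-form fit of features -> voxels
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Held-out prediction accuracy, the usual per-voxel correlation score
X_te = rng.standard_normal((100, n_feat))
Y_te = X_te @ W + 0.5 * rng.standard_normal((100, n_vox))
pred = X_te @ W_hat
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
```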
q-bio.NC 2026-05-06

Neural manifolds crystallize from synchronization and Hebbian plasticity

Neural Manifolds as Crystallized Embeddings: A Synthesis of the Free Energy Principle, Generalized Synchronization, and Hebbian Plasticity

Free energy geometry arises bottom-up in recurrent circuits driven by the world, then fixed by plasticity into autonomous attractors.

The free energy principle casts perception as variational inference, but its biological implementation remains underspecified. In particular, the generalized-coordinate formalism should not be read as a literal claim that neurons compute arbitrary Taylor expansions. This paper argues that generalized synchronization provides the missing bottom-up mechanism. A contractive recurrent circuit driven by structured sensory input can synchronize to the driving dynamics. Under generic embedding conditions developed in the reservoir-computing literature, the resulting synchronization map can embed the low-dimensional sensory manifold into neural state space. Thus, the geometry predicted by the free energy principle need not be imposed from above by an explicitly Bayesian neural calculus; it can arise from ordinary recurrent dynamics driven by the world. I then propose a developmental extension. Hebbian plasticity acting on the correlations generated by sensory-driven synchronization may crystallize the embedded manifold into recurrent connectivity, yielding an autonomous continuous attractor network when the required fixed point exists. On this view, mature head-direction, grid-cell, and stimulus-driven visual manifolds are not genetically prespecified templates, but developmental products of three interacting processes: dynamical contraction, generalized synchronization, and correlation-based plasticity. The synthesis links the free energy principle, reservoir-computing embedding theorems, and contraction-theoretic models of Hebbian recurrent networks. It also yields testable predictions about dimensional thresholds for topological recovery, developmental sensitivity to plasticity, and the dependence of attractor geometry on input statistics. The central open problem is whether the Hebbian fixed point exists and preserves the embedding quality of the synchronization manifold.
q-bio.NC 2026-05-05

NeuralSet gives one Python interface for brain data and AI models

NeuralSet: A High-Performing Python Package for Neuro-AI

Lazy extraction of metadata and recordings lets the same code move from laptop tests to cluster runs without manual wrangling.

Artificial intelligence (AI) is increasingly central to understanding how the brain processes information. However, the integration of neuroscience and modern AI is bottlenecked by a fragmented software ecosystem. Current tools are siloed by recording modality and optimized for small-scale, in-memory workflows, limiting the use of massive, naturalistic datasets. Here, we introduce NeuralSet, a Python framework that efficiently unifies the processing of diverse neural recordings (including fMRI, M/EEG, and spikes) and complex experimental stimuli (such as text, audio, and video). By decoupling experimental metadata from lazy, memory-efficient data extraction, NeuralSet harmonizes standard neuroscientific preprocessing pipelines with pretrained deep learning embeddings. This approach provides a single PyTorch-ready interface that scales seamlessly from local prototyping to high-performance cluster execution. By eliminating manual data wrangling and ensuring full computational provenance, NeuralSet establishes a scalable, unified infrastructure for the next generation of neuro-AI research.
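
The metadata/extraction decoupling the abstract describes follows a familiar lazy-loading pattern. A hypothetical sketch of the idea only (this class and its fields are invented for illustration and are not NeuralSet's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LazyRecording:
    """Metadata is available immediately; the heavy array is read only on
    first access. Hypothetical stand-in, not NeuralSet's real interface."""
    path: str
    modality: str
    loads: int = 0
    _data: Optional[list] = None

    def data(self):
        if self._data is None:        # extraction deferred until needed
            self.loads += 1
            self._data = [0.0] * 4    # stand-in for reading from disk
        return self._data

rec = LazyRecording("sub-01_task-movie_bold.nii.gz", "fMRI")
# Metadata can be filtered and batched with zero I/O; repeated access
# triggers exactly one extraction.
rec.data(); rec.data()
```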
q-bio.NC 2026-05-05

Score products recover directed neural Jacobians from snapshots

Inferring Active Neural Circuits Using Diffusion Scores

The approach infers lag-specific circuit interactions in high-dimensional recordings without assuming dynamics.

In biological systems, neural circuits compute through directed, short-latency interactions whose effects unfold across multiple time scales and behavioral contexts. We address the problem of inferring these local, lag-specific interactions from sampled neural population activity under varying stimuli, without assuming a parametric form for the underlying dynamics. Our approach leverages denoising score models by estimating joint-window scores over consecutive activity snapshots (i.e., brain states) and converting these scores into calibrated, directed edge tests via cross-block score products. The key insight is that these products recover the Jacobian of the transition map between brain states under nonlinear dynamics. To cleanly separate lag-specific effects, we introduce minimal multi-block windows that condition on intermediate time points, avoiding the omitted-lag bias inherent in pairwise analyses. The resulting method, Score-Block Time Graphs (SBTG), identifies lag-specific directed interactions in sampled neuronal population data. We specifically apply SBTG to whole-brain C. elegans calcium imaging data to recover lag-specific circuit structure not resolved by current methods, including improved alignment with independent connectomes, cell-type-specific temporal organization, and neuromodulatory profiles consistent with known receptor kinetics. These findings highlight the potential for SBTG to serve as a practical "AI for science" tool by turning high-dimensional neural population recordings into statistically testable circuit hypotheses.
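
The key identity, cross-block score products recovering the transition Jacobian, can be sanity-checked in the linear-Gaussian case, where the joint-window score is linear and the joint precision matrix plays the role of the score products. A synthetic three-neuron sketch (directly estimating the precision stands in for a learned denoising score model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 100_000
A = np.array([[0.7, 0.2, 0.0],
              [0.0, 0.5, -0.3],
              [0.1, 0.0, 0.6]])   # ground-truth transition Jacobian

# Simulate consecutive-snapshot windows z_t = (x_t, x_{t+1})
x = np.zeros(d)
snaps = []
for _ in range(T):
    x_next = A @ x + 0.5 * rng.standard_normal(d)
    snaps.append(np.concatenate([x, x_next]))
    x = x_next
Z = np.array(snaps)

# For a Gaussian window, score s(z) = -Lambda z, so E[s s^T] = Lambda:
# the precision is exactly the matrix of score products.
Lam = np.linalg.inv(np.cov(Z.T))
L_yx = Lam[d:, :d]                       # cross-block: x_{t+1} vs x_t
L_yy = Lam[d:, d:]
A_hat = -np.linalg.solve(L_yy, L_yx)     # recovered directed Jacobian
```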
q-bio.NC 2026-05-04

EEG and EMG track neural regeneration after injury

Electroencephalography and Electromyography as a Non-Invasive Biomarker of Neural Regeneration: A Review of Central and Peripheral Nervous System Injury and Regeneration

Brain wave patterns and muscle signals reveal recovery in central and peripheral nervous systems by capturing plasticity and reinnervation.

Regeneration of the nervous system after injury remains an important therapeutic objective, especially in the central nervous system (CNS), where regeneration is restricted by both intrinsic neuronal limitations and adverse extracellular environments. Conversely, the peripheral nervous system (PNS) displays enhanced regenerative capability in the presence of supportive Schwann cells (SC) and pro-growth stimuli. While the structural and molecular mechanisms are thoroughly understood, functional biomarkers that can non-invasively monitor regeneration in real time are limited. In this review, we discuss the promise of electroencephalography (EEG) and electromyography (EMG) as real-time, non-invasive biomarkers to monitor nerve damage and regeneration in both CNS and PNS contexts. First, we contrast biological and electrophysiological indicators of CNS/PNS injury, showing how EEG signs, including oscillatory power, connectivity, and evoked potential changes, reflect injury-related dysfunction as well as neuroplastic reorganization. EMG, in turn, provides direct insight into muscle activation and peripheral output, usefully complementing EEG in assessing neuromuscular pathway integrity and reactivation. In CNS injuries (e.g., stroke, spinal cord injury (SCI)), EEG typically shows global slowing, disrupted interhemispheric coherence, and partial recovery of higher frequencies. For PNS injuries, EEG can capture cortical remapping and the return of somatosensory evoked responses as peripheral connectivity is re-established. EMG additionally enables monitoring of reinnervation and restoration of functional motor output. This review presents a dual-system perspective, positioning EEG and EMG not only as diagnostic tools but also as functional biomarkers of neural regeneration, thereby bridging electrophysiology, plasticity, and clinical recovery.
q-bio.NC 2026-05-04

Spiking network coordinates neurons via brain-like rhythms and delays

From Cortical Synchronous Rhythm to Brain Inspired Learning Mechanism: An Oscillatory Spiking Neural Network with Time-Delayed Coordination

Bottom-up spike accumulation and top-down time-delayed modulation produce transient synchrony for decoding, binding, and reasoning tasks.

Human cognition emerges from coordinated spiking dynamics in distributed neural circuits, where information is encoded via both firing rates and precise spike timing determined by brain rhythms. Inspired by this notion, we propose a brain-inspired learning primitive in which cognition-level neural synchrony emerges through iterative bottom-up and top-down interactions between micro-scale dynamics of spiking neurons and a macro-scale mechanism of oscillatory synchronization. Specifically, we model each parcel (e.g., a cortical region or an image pixel) in the target system as a spiking neuron embedded in a predefined connectivity scaffold. Low-level information is encoded in a spatiotemporal domain, where neurons are selectively grouped and fire spontaneously over time through self-organized dynamics. In the bottom-up route, oscillatory synchronization is formed from past spiking activity accumulated over a finite memory window. Since brain dynamics operate in a regime of partial and transient synchronization rather than global phase locking, we model oscillatory coordination using a time-delayed synchronization formulation, which enables a top-down modulation of heterogeneous neural spiking for a large-scale distributed system. Together, we devise a spiking-by-synchronization neural network (S2-Net) that uses rhythmic timing as a control mechanism for efficient information processing. Promising results have been achieved across a broad range of tasks, including neural activity decoding, energy-efficient signal processing, temporal binding and semantic reasoning.
q-bio.NC 2026-05-04

Automata structures make understanding measurable

Measuring Understanding Through Discrete Compositional Knowledge Structures in Hierarchical Automata

Hierarchical models built from finite state machines and their compositions generate five inspectable signatures that track understanding formation and distinguish it from statistical correlation

How do we measure genuine understanding in artificial cognitive systems? Current approaches face a measurement gap: probabilistic systems refine confidence gradually, practice-based systems compile knowledge through repeated execution, and neural systems distribute understanding across opaque embedding spaces. We propose that making understanding measurable requires architectures where understanding formation produces discrete, inspectable structural signatures. This paper presents hierarchical automata built from finite state machines representing patterns and higher-order automata representing compositions. Constrained inference constructs automata from single observations. Similarity detection clusters related automata, making concept robustness quantifiable. Graph memory makes compositional knowledge directly inspectable. Metacognitive mechanisms enable observable reconfiguration. We demonstrate understanding measurement in a simple geometric domain. Graph evolution tracking reveals five measurable signatures: immediate representation formation, structural knowledge, generalization capacity, compositional awareness, and metacognitive access. These measurements distinguish structural understanding from statistical correlation. Our contribution is a framework for making understanding measurable through discrete compositional knowledge structures. This measurement capability complements perceptual learning in neural systems and task execution in neurosymbolic architectures.
q-bio.NC 2026-05-04

Connectivity pruning cuts BCI bands by up to 78%

Functional Connectivity-Guided Band Selection for Motor Imagery Brain-Computer Interfaces

Phase measures on four channels pick the frequencies that keep motor imagery accuracy near baseline and reduce CSP computations by 22-78%.

Reliable control in motor imagery brain-computer interfaces (MI-BCIs) requires the precise decoding of user-specific neural rhythms, which vary significantly across individuals. The Common Spatial Pattern (CSP) algorithm is a cornerstone of MI-BCI decoding, yet its performance depends strongly on the spectral range of the input EEG data. Although Filter Bank CSP (FBCSP) extends this as a data-driven decoding framework, its frequency sub-bands are predefined rather than selected using subject-specific physiological criteria. This paper presents a proof-of-concept study of static functional connectivity (FC)-guided band selection for MI-BCI, demonstrated using a conventional FBCSP-based pipeline. The proposed method identifies the most discriminative spectral bands by calculating phase-based connectivity across four sensorimotor channels using wPLI, PLV, and PLI. Nine bands in a 4-40 Hz filter bank are ranked by the effect size of their hemispheric coupling differences and pruned to the top K bands for feature extraction and classification via FBCSP and a Support Vector Regressor. This framework was tested for K values ranging from 1 to 8 across the BCI Competition IV-2a (n = 9) and OpenBMI (n = 54) datasets. Performance was benchmarked against standard nine-band FBCSP and random ablation to determine the minimum number of bands (K*) required to maintain accuracy within a 2% baseline equivalence zone. Results show FC-guided selection can outperform random ablation and achieve near-baseline performance while reducing required CSP fits by 22.2% to 77.8%. PLV enables the most aggressive dimensionality reduction by prioritizing the μ and low-β ranges, while wPLI demonstrates superior inter-session robustness by mitigating volume conduction. These findings establish FC-guided selection as a principled and interpretable alternative to heuristic filter bank designs.
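
The core loop, phase-based connectivity per candidate band followed by ranking, can be sketched with a hand-rolled PLV. The two-channel toy signal and band grid below are illustrative; the paper ranks bands by effect sizes of hemispheric coupling differences across trials, which this sketch does not reproduce:

```python
import numpy as np

def band_phase(sig, fs, lo, hi):
    # Instantaneous phase of the narrowband analytic signal: keep only
    # positive-frequency FFT bins inside [lo, hi] Hz (doubled), invert.
    F = np.fft.fft(sig)
    freqs = np.fft.fftfreq(sig.size, 1 / fs)
    return np.angle(np.fft.ifft(2 * F * ((freqs >= lo) & (freqs <= hi))))

def plv(x, y, fs, band):
    """Phase-locking value between two channels within one band."""
    dphi = band_phase(x, fs, *band) - band_phase(y, fs, *band)
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 11 * t)                 # shared mu-band rhythm
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)

bands = [(4, 8), (8, 13), (13, 30), (30, 40)]
scores = [plv(x, y, fs, b) for b in bands]
best = bands[int(np.argmax(scores))]                # the 8-13 Hz band wins
```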
q-bio.NC 2026-05-04

Resting brain networks differentiate anxiety's behavioral, physiological, and subjective facets

Intrinsic Brain Networks Underlying the Experience and Expression of Subclinical Anxiety

In 47 adults, specific resting-state connections tied each facet of subclinical anxiety to distinct regions, extending task findings to rest

Anxiety includes behavioural, physiological, and subjective components that do not always align, and it remains unclear whether these dimensions are supported by distinct intrinsic brain networks. Guided by the two-system framework, we tested whether resting-state functional connectivity (rsFC) differentiates these components in subclinical anxiety. Forty-seven young adults spanning a range of subclinical anxiety levels completed a threat anticipation task measuring behavioral responses (reaction time) and physiological arousal (skin conductance), along with the NIH Fear-Affect self-report of anxiety severity. These measures were related to rsFC using region-of-interest analyses. Higher subclinical anxiety was associated with faster responses under temporally uncertain threat, consistent with increased vigilance, while no association was found with physiological arousal. At the neural level, three connectivity patterns emerged and remained significant after sequential family-wise error correction. Behavioural responses modulated by subclinical anxiety were linked to stronger connectivity between the anterior cingulate cortex (ACC) and insula. Physiological modulation was associated with connectivity between the ACC and orbitofrontal cortex (OFC). Subjective anxiety was associated with increased connectivity between the hippocampus and insula. Additional connections were observed but did not survive stricter correction. Overall, the findings indicate that behavioural, physiological, and subjective aspects of subclinical anxiety map onto partially dissociable but overlapping intrinsic brain networks, extending prior task-based results to resting-state connectivity and informing future work on early neural markers of anxiety.
q-bio.NC 2026-05-01

Consciousness model blueprint reaches 72% on sarcasm and humor tasks

CTM-AI: A Blueprint for General AI Inspired by a Model of Consciousness

By selecting and integrating outputs from many foundation-model processors, CTM-AI also gains over 10 points on tool-use and web-agent tests

Despite remarkable advances, today's AI systems remain narrow in scope, falling short of the flexible, adaptive, and multisensory intelligence that characterizes human capabilities. This gap has fueled longstanding debates about whether AI might one day achieve human-like generality or even consciousness, and whether theories of consciousness can inspire new architectures for AI. This paper presents an early blueprint for implementing a general AI system, CTM-AI, combining the Conscious Turing Machine (CTM), a formal machine model of consciousness, with today's foundation models. CTM-AI contains an enormous number of powerful processors ranging from specialized experts (e.g., vision-language models and APIs) to unspecialized general-purpose learners poised to develop their own expertise. Crucially, for whatever problem must be dealt with, information from many processors is selected, integrated, and exchanged appropriately to solve the task. CTM-AI achieves state-of-the-art accuracy on MUStARD (72.28) and UR-FUNNY (72.13), outperforming multimodal and multi-agent frameworks. On tool-using and agentic tasks, CTM-AI achieves 10+ points of improvement on StableToolBench and WebArena-Lite. Overall, CTM-AI offers a principled, testable blueprint for general AI inspired by a model of consciousness.
q-bio.NC 2026-05-01

Flies recruit visual neurons into odor memory engrams

Multisensory learning recruits visual neurons into an olfactory memory engram

Multisensory training broadens the engram so either color or scent alone can trigger the full memory.

Associating multiple sensory cues with a single experience or object is a fundamental process that improves object recognition and memory performance. However, neural mechanisms that bind sensory features during learning and augment memory expression are unknown. Here we demonstrate multisensory appetitive and aversive memory in Drosophila. Combining colours and odours improved memory performance, even when each sensory modality was tested alone. Temporal control of neuronal function revealed visually-selective mushroom body Kenyon Cells (KCs) to be required for enhancement of visual and olfactory memory recall after multisensory training. Synapse-level connectomics suggests that valence-relevant dopaminergic reinforcement could permit the KC-spanning serotonergic DPM neurons to bridge between previously modality-selective KC streams. Consistent with this model, DPM transmission is uniquely required during multisensory memory formation and for enhanced expression of olfactory memory afterwards. In addition, signalling via the DopR1 dopamine receptor is required in APL neurons, suggesting that reinforcing dopamine could locally release GABA-ergic inhibition to permit bridging microcircuits to function. Cross-modal binding thereby expands the KCs representing the olfactory memory engram into those representing the colour. We propose that broadening of the engram improves memory performance after multisensory learning and permits a single sensory feature to retrieve the memory of the multimodal experience.
q-bio.NC 2026-05-01

Rescorla-Wagner learning equals Bayesian inference in symmetric bandits

On Agentic Behavioral Modeling

Agentic behavioral modeling treats artificial agents as generative hypotheses tested against human perceptual and learning data.

Integrating theoretical neuroscience, decision theory, and probabilistic inference offers a promising route to understanding human cognition, yet concrete methodological bridges between agentic AI models and behavioral data analysis remain formally underdeveloped. We advance this synthesis under the framework of agentic behavioral modeling (ABM), which treats artificial agents as latent, generative hypotheses about cognitive mechanisms and evaluates them by their statistical adequacy in explaining human behavior. After outlining its conceptual foundations, we apply the framework to two minimal laboratory paradigms: a binary perceptual contrast-discrimination task and a symmetric two-armed bandit learning task. We formalize each task-agent-data system as a joint probability model, derive explicit conditional log-likelihoods for behavioral inference, validate different model variants using model and parameter recovery simulations, and evaluate them in light of empirical data. Using these minimal examples, we provide an agent-centric interpretation of the psychometric function, derive optimal policies for both tasks, and show the equivalence between Rescorla-Wagner learning and Bayesian inference in symmetric bandits. More broadly, this work may serve as a conceptual and practical foundation for applying ABM to cognitive behavioral science.
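
The cited equivalence has a simple special case worth seeing: the Rescorla-Wagner delta rule with a 1/n learning rate reproduces the running sample mean, which coincides with the Bayesian posterior mean under a flat Beta(1,1) prior up to O(1/n) terms. A sketch (the reward probability and trial count are arbitrary, and this illustrates only one arm of the bandit):

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.7, size=500)   # one arm of a Bernoulli bandit

# Rescorla-Wagner delta rule with a decaying 1/n learning rate ...
V = 0.0
for n, r in enumerate(rewards, start=1):
    V += (1 / n) * (r - V)

# ... reproduces the sample mean exactly, and hence the Bayesian
# posterior mean under a Beta(1,1) prior up to O(1/n) shrinkage.
posterior_mean = (1 + rewards.sum()) / (2 + rewards.size)
```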
q-bio.NC 2026-05-01

Single videos of infants yield humanoid sensor streams

Simulating Infant First-Person Sensorimotor Experience via Motion Retargeting from Babies to Humanoids

3D pose extraction maps baby motions to platforms producing proprioception, touch and vision data at sub-centimeter accuracy.

Motion retargeting from humans to human-like artificial agents is becoming increasingly important as humanoid robots grow more capable. However, most existing approaches focus only on reproducing kinematics and ignore the rich sensorimotor experience associated with human movement. In this work, we present a framework for simulating the multimodal sensorimotor experiences of infants using physical and virtual humanoids. From a single video, our method reconstructs the infant's body configuration by extracting its skeletal structure and estimating the full 3D pose from each frame. Then we map the reconstructed motion onto several developmental platforms: the physical iCub robot and the virtual simulators pyCub, EMFANT and MIMo. Replaying the retargeted motions on these embodiments produces simulated multisensory streams including proprioception (joints and muscles), touch, and vision. For the best-matching embodiment, the retargeting achieves sub-centimeter accuracy and enables a rich multimodal analysis of infant development as well as enhanced automated annotation of behaviors. This framework provides a unique window into the infant's sensorimotor experience, offering new tools for robotics, developmental science, and early detection of neurodevelopmental disorders. The code is available at https://github.com/ctu-vras/motion-retargeting/.
q-bio.NC 2026-04-29

Cortical surface eigenmodes sharpen whole-brain EEG/MEG maps

A geometry aware framework enhances noninvasive mapping of whole human brain dynamics

Participant-specific geometric modes resolve the inverse problem and recover fast dynamics aligned with anatomical pathways.

Non-invasive electrophysiology lacks methods that accurately reconstruct whole-brain spatiotemporal dynamics while incorporating individual cortical geometry, leaving current electroencephalography and magnetoencephalography source imaging limited by simplistic or biologically implausible priors. Here, we show that embedding participant-specific Geometric Basis Functions (GBFs), eigenmodes derived from each individual's cortical surface, provides a powerful anatomic constraint that resolves the inverse problem and improves reconstruction fidelity. The method reconstructs neural sources as linear combinations of geometric basis functions, thereby aligning source estimates with the geometric organization of neural dynamics. We validate GBF across the Meta-Source Benchmark, task-evoked data, resting-state networks, intracranial stimulation, and epilepsy data. The results demonstrate that GBF yields high localization accuracy and captures fast spatiotemporal dynamics consistent with anatomical pathways. These findings suggest that both spontaneous and evoked whole-brain activity can be described by hundreds of geometric modes, providing a compact yet accurate representation of neural sources. By linking cortical geometry to electrophysiological dynamics, GBF offers a versatile source imaging tool for both scientific and clinical applications.
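
The reconstruction step, sources as linear combinations of a few geometric modes, reduces the underdetermined inverse problem to a small least-squares solve. A sketch with a random orthonormal basis standing in for cortical eigenmodes (all dimensions and the noise level are illustrative, and the sketch assumes the source lies in the mode subspace):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sens, n_src, n_modes = 64, 500, 20
L = rng.standard_normal((n_sens, n_src))                     # hypothetical leadfield
B = np.linalg.qr(rng.standard_normal((n_src, n_modes)))[0]   # stand-in "geometric" modes

c_true = rng.standard_normal(n_modes)
s_true = B @ c_true                           # source confined to the mode subspace
y = L @ s_true + 0.01 * rng.standard_normal(n_sens)

# Constrained inverse: 500 unknowns collapse to 20 mode coefficients,
# making the sensor-level system well-posed.
G = L @ B                                     # n_sens x n_modes
c_hat = np.linalg.lstsq(G, y, rcond=None)[0]
s_hat = B @ c_hat                             # expanded source estimate
```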
q-bio.NC 2026-04-29

AI chatbots catch psychiatric emergencies but over-triage milder cases

One-shot emergency psychiatric triage across 15 frontier AI chatbots

112-vignette tests reveal near-perfect emergency detection but net over-triage of lower-risk psychiatric cases.

AI chatbots are increasingly used for health advice, but their performance in psychiatric triage remains undercharacterized. Psychiatric triage is particularly challenging because urgency must often be inferred from thoughts, behavior, and context rather than from objective findings. We evaluated the performance of 15 frontier AI chatbots on psychiatric triage from realistic single-message disclosures using 112 clinical vignettes, each paired with 1 of 4 original benchmark triage labels: A, routine; B, assessment within 1 week; C, assessment within 24 to 48 hours; and D, emergency care now. Vignettes covered 9 psychiatric presentation clusters and 9 focal risk dimensions, organized into 28 presentation-by-risk groups. Each group contributed 4 distinct vignettes, with 1 vignette at each triage level. Each vignette was rendered as a realistic human-authored conversational query, and the AI chatbots were tasked with assigning a triage label from that disclosure. Emergency under-triage occurred in 23 of 410 level D trials (5.6%), and all under-triaged emergencies were reassigned to level C urgency. Across target models, average accuracy ranged from 42.0% to 71.8%. Accuracy was highest for level D vignettes (94.3%) and lowest for level B vignettes (19.7%). Mean signed ordinal error was positive (+0.47 triage levels), indicating net over-triage. Dispersion was highest around the middle triage levels. All results were confirmed relative to clinician consensus labels from 50 medical doctors. When presented with user messages containing sufficient clinical information, frontier AI chatbots thus recognized psychiatric emergencies as requiring urgent medical assessment with near-zero error rates, yet showed marked over-triage for low and intermediate risk presentations.
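
The two headline metrics, mean signed ordinal error and emergency under-triage, are straightforward to compute from paired labels. A sketch on hypothetical data (the six label pairs below are invented for illustration):

```python
# Ordinal triage scale from the study: A < B < C < D
levels = {"A": 0, "B": 1, "C": 2, "D": 3}
benchmark = ["D", "D", "B", "A", "C", "B"]   # invented benchmark labels
predicted = ["D", "C", "C", "B", "C", "C"]   # invented chatbot labels

signed = [levels[p] - levels[b] for p, b in zip(predicted, benchmark)]
mean_signed = sum(signed) / len(signed)      # > 0 indicates net over-triage

# Emergency under-triage: level-D vignettes assigned any lower level
under_d = sum(1 for p, b in zip(predicted, benchmark)
              if b == "D" and levels[p] < levels["D"])
```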
q-bio.NC 2026-04-28

Twin models with repeated-scan error terms structure genetic effects on brain connections

The Genetic and Environmental Architecture of the Human Functional Connectome

Correcting for measurement noise in fMRI data separates additive, dominant, and environmental influences into coherent multiscale networks.

Functional connectivity varies across individuals due to genetic and environmental factors, yet classical twin models typically confound non-shared environment with measurement error and are largely limited to resting-state analyses. We hypothesized that: i) explicitly modeling measurement error from repeated fMRI sessions enables more accurate application of classical twin models (ACE/ADE) to functional connectivity; ii) model applicability depends on scan-length and parcellation granularity; iii) genetic and environmental effects on functional connectomes show differentiated functional modules across conditions. We extended ACE/ADE models to include a repeated-scan derived error term by analyzing monozygotic and dizygotic twins from the Young-Adult Human Connectome Project dataset. Genetic and environmental variance components were estimated for all functional couplings across resting-state and task conditions, integrated across conditions using a minimum-error criterion, and analyzed using multilayer community detection across resolution scales. Functional couplings segregated into distinct categories characterized by shared environmental, additive, dominant, or epistatic influences, with a substantial fraction not meeting twin-model assumptions. Integrating across conditions revealed hierarchical community structure in genetic and environmental components observed across community resolution scales. Incorporating measurement error into twin models improves interpretability and applicability at the functional connectome level, revealing that genetic and environmental influences are structured into coherent, multiscale brain networks.
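
At its simplest, the correction the authors propose can be illustrated with Falconer-style estimators: disattenuating the twin correlations by a test-retest reliability moves measurement noise out of the non-shared-environment term. A sketch (the correlations and the 0.8 reliability are invented; the paper fits full ACE/ADE structural models, not these closed-form estimators):

```python
def ace(r_mz, r_dz, reliability=1.0):
    """Falconer-style ACE estimates, optionally disattenuating the
    observed twin correlations by a repeated-scan reliability."""
    r_mz, r_dz = r_mz / reliability, r_dz / reliability
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment (plus any residual error)
    return a2, c2, e2

naive = ace(0.48, 0.28)                       # measurement error folded into E
corrected = ace(0.48, 0.28, reliability=0.8)  # error moved out of E
```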
q-bio.NC 2026-04-28

Real-time line assignment for reading gaze closes offline gap to 1-2%

Sure About That Line? Approaching Confidence-Based, Real-Time Line Assignment in Reading Gaze Data

CONF-LA scores each fixation with Gaussian likelihoods and reading priors, then defers uncertain cases at 0.348 ms latency.

Remote and webcam-based eye tracking in multi-line reading suffers from various noise factors and layout ambiguity, precisely where real-time reading support needs reliable, per-fixation line assignment. Prior work largely addresses this challenge post hoc or by restricting behavior (e.g., disallowing re-reading), undermining interactive use. We propose CONF-LA (Confidence-score-based Online Fixation-to-Line Assignment), a principled, low-latency approach that integrates knowledge about reading behavior and Gaussian line likelihoods over fixations to compute a posterior-line-score and defers assignments when uncertainty is high. Evaluated on existing open-source data, CONF-LA demonstrates stable performance in post hoc analysis and closes the online-offline gap (1-2 %) with a mean per-fixation latency of 0.348 ms. Our approach exhibits particular invariance toward regressions, yielding significant improvement in ad hoc median accuracies on children data (approx. 95 %) over all tested algorithms. We encourage further research in this direction and discuss possibilities for future development.
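The confidence-based deferral idea can be sketched as follows: score a fixation's vertical position under a Gaussian likelihood per line, combine with a prior, and withhold the assignment when the winning posterior is weak. All names, the sigma, and the 0.6 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def assign_line(fix_y, line_ys, sigma=0.5, prior=None, defer_at=0.6):
    """Sketch of confidence-based online fixation-to-line assignment:
    Gaussian line likelihoods x reading prior -> posterior line score;
    defer when the best posterior is below a confidence threshold."""
    line_ys = np.asarray(line_ys, float)
    if prior is None:                        # uniform prior over lines
        prior = np.full(len(line_ys), 1.0 / len(line_ys))
    # Gaussian likelihood of the fixation's y under each line center
    lik = np.exp(-0.5 * ((fix_y - line_ys) / sigma) ** 2)
    post = lik * prior
    post /= post.sum()
    best = int(np.argmax(post))
    if post[best] < defer_at:                # defer uncertain assignments
        return None, post
    return best, post

line, post = assign_line(fix_y=2.1, line_ys=[0.0, 2.0, 4.0])      # confident
deferred, _ = assign_line(fix_y=1.0, line_ys=[0.0, 2.0, 4.0])     # ambiguous midpoint
```

A fixation near a line center is assigned; one equidistant between two lines is deferred rather than guessed, which is the behavior the abstract credits for closing the online-offline gap.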
q-bio.NC 2026-04-27

Behavior needs closed-loop brain-body-environment models

Integrative neurocybernetic modeling in the era of large-scale neuroscience

Integrative neurocybernetic models pool data constraints to reveal shared dynamical principles and control objectives.

Large-scale neuroscience is generating rich datasets across animals, brain areas and behavioral contexts, yet our modeling efforts remain fragmented across isolated experiments. We argue that understanding behavior requires integrative neurocybernetic models: understandable dynamical models that capture the closed-loop coupling of brain, body and environment, treat the brain as a controller pursuing latent objectives, represent structured variation across scales, and scale to heterogeneous datasets. Such models shift the goal from predicting neural recordings in isolation to inferring the organizing principles that govern neural and behavioral dynamics. We outline a practical route toward this goal by combining nonlinear state-space models and meta-dynamical extensions with scalable inference, knowledge distillation, mixed open- and closed-loop training, and connectomics-informed architectures. By pooling complementary constraints from recordings, behavior, perturbations and anatomy, integrative neurocybernetic models can provide statistical amplification, few-shot generalization, and mechanistic insight into shared dynamical structure, individual variation, and the control objectives that govern behavior. This agenda offers a model-centric path from fragmented data to a mechanistic science of how brains produce behavior.
q-bio.NC 2026-04-27

Pupil size and fixations classify left vs right brain activity

EyeBrain: Left and Right Brain Lateralization Activity Classification Through Pupil Diameter and Fixation Duration

Eye metrics distinguish hemispheric dominance during cognitive tasks and reach 0.894 F1, enabling non-invasive monitoring.

The relationship between brain lateralization and cognitive functions is well-documented. The left hemisphere primarily handles tasks such as language and arithmetic, while the right hemisphere is involved in creative activities like drawing and music perception. Eye-tracking technology has shown the potential to reveal cognitive states by measuring ocular metrics such as pupil diameter and fixation duration. However, the ability to distinguish lateralized brain activity using these ocular metrics remains underexplored. Here, we demonstrate that pupil diameter and fixation duration can effectively classify left and right brain hemisphere activities. We obtained notably high classification performance, with an F1 score of 0.894. The results suggest that ocular metrics are robust indicators of lateralized brain activity and can be applied in cognitive monitoring and neurorehabilitation. Our future work will expand on this by integrating these methods into real-time applications such as EyeBrain, potentially broadening their use across various cognitive and neurological domains.
q-bio.NC 2026-04-27

RNN separates brain activity into three network configurations

Triple Configuration of Brain Networks Based on Recurrent Neural Networks: The Synergistic Effects of Exogenous Stimuli, Task Demands, and Spontaneous Activity

Model of resting EEG data shows parietal hub integrates stimuli, tasks, and spontaneous activity.

The foundation of cognitive flexibility and higher-order intelligence lies in the functional structure and activity of brain networks, which can be dynamically configured by both external environments and internal states. However, decoding these dynamics from high-dimensional neural data remains a challenge. In this study, we propose a computational framework using Recurrent Neural Networks (RNNs) with neural dynamic constraints to model source-localized resting-state EEG data from $114$ participants. We aim to clarify the "triple brain network configurations" driven by exogenous and endogenous factors, including external stimuli, information processing tasks, and spontaneous activities. Our model identifies the parietal network as a critical hub supporting these multiple configuration patterns. Furthermore, we reveal that the anterior and posterior parietal regions exhibit distinct functional specializations under different stimulus modalities. By formalizing a triple configuration framework, this work separates latent factors of brain dynamics and underscores the computational significance of parietal regions in orchestrating higher-order intelligence.
q-bio.NC 2026-04-27

Vision works as looking before seeing through a V1 bottleneck

Vision as looking and seeing through a bottleneck

Primary visual cortex creates a saliency map that selects content for recognition by guiding where the eyes move next.

Progress in vision research has been slower downstream than upstream of primary visual cortex (V1). Traditional frameworks have largely overlooked a central constraint: only a tiny fraction of retinal input is recognized. Thus, to a first approximation, vision is better formulated as looking and seeing through a bottleneck. Looking, mainly by the peripheral visual field, selects visual information to enter this bottleneck, largely via gaze shifts that center selected contents at fovea. Seeing, mainly by the central visual field, recognizes this content. Converging evidence suggests that V1 initiates the bottleneck and contributes to looking by generating a bottom-up saliency map that guides saccades exogenously, and that top-down feedback along the visual pathway, targeting mainly the representation of the central visual field, refines seeing. Progress will accelerate through falsifiable theories that explicitly link behavior with neural substrates, and by experimental designs that avoid forced fixation and precisely track gaze.
q-bio.NC 2026-04-27

V1 builds saliency map to steer saccades and starts vision bottleneck

What are the functions of primary visual cortex (V1)?

Vision works by selecting narrow slices of the scene through eye movements rather than processing the whole view at once.

Although Hubel and Wiesel established decades ago how individual V1 neurons transform retinal inputs, functions of V1 as a whole are being discovered only recently. First, V1 acts as a motor cortex for exogenously guiding saccades by constructing a bottom-up saliency map of the visual field. Second, V1 initiates a processing bottleneck: a massive reduction of visual information begins at its output to downstream areas. Third, downstream recognition is limited by impoverished information, V1 supports ongoing recognition by providing additional information queried by top-down feedback from downstream areas, directed predominantly to central visual field representations. These V1 functions underpin a framework in which vision is mainly looking and seeing through the bottleneck. Looking selects a fraction of visual information into the bottleneck, largely by saccades that center selected contents at gaze. Seeing recognizes the selected contents. Looking and seeing rely mainly on processing in the peripheral and central visual fields.
q-bio.NC 2026-04-27

EEG flags early pre-configuration failure in repetitive subconcussion

Early Preconfiguration Failure: A Novel Predictor of the Repetitive Subconcussion

Millisecond integration patterns drop in patients and let classifiers separate them from controls and chronic TBI cases.

Early diagnosis and assessment of repetitive subconcussive (rSC) brain injuries are crucial for early clinical intervention. Conventional methods, largely relying on slow fMRI, fail to capture millisecond-level early cortical dynamics, particularly spatiotemporal features associated with pre-configuration dynamics. This study introduces a novel approach integrating dynamic hierarchical spatial features and cortical early behavioral time-domain sensitivity, utilizing EEG and visual attention tasks. We analyzed cortical early behaviors in 24 healthy controls (HC), 21 rSC patients, and a validation cohort of 25 cTBI patients from public datasets. Results reveal distinct temporal patterns in HC: elevated integration at 0-100 ms, rebound dynamics at 100-200 ms, and visual perception integration peaks at 200-600 ms. In contrast, rSC patients exhibited significantly impaired dynamic features, with reduced integration levels indicating a decline in pre-configuration dynamics. Signed center distance (SCD) analysis of separation-integration trajectories showed significantly lower early SCD values in rSC patients compared to HC, while cTBI patients displayed negative SCD values, reflecting irreversible damage. Machine learning classification achieved optimal performance in distinguishing between HC, rSC, and cTBI groups using early cortical features, highlighting the critical role of millisecond-level cortical dynamics in rSC diagnosis.
q-bio.NC 2026-04-27

Ear device records brain signals while playing audio

Earable Platform with Integrated Simultaneous EEG Sensing and Auditory Stimulation

Custom-molded earpiece detects eye movements and alpha waves during sound delivery for potential real-time brain control of audio.

Conventional scalp-based EEG systems are cumbersome to use, requiring extensive setup, restrictive wiring, and conductive gels that can dry out and limit long-term monitoring, while also carrying social stigma. As a result, there is increasing interest in in-ear EEG technology to improve comfort, convenience, and discretion for users. This work presents a personalized in-ear EEG monitor (IEEM) that simultaneously captures EEG signals from the outer ear while delivering audio playback through the same device. The earpiece is custom-molded to precisely match the user's ear anatomy, providing effective sound isolation from the environment and enabling direct audio transmission into the ear canal. Testing of the assembled earpiece shows successful detection of electrooculography (EOG), eye blinks, jaw clenches, auditory steady-state responses (ASSR), and alpha modulation. Electrochemical impedance spectroscopy (EIS) measurements confirm stable electrode-skin contact, with impedance values similar to those of traditional dry electrodes. The integrated approach enables potential closed-loop neuromodulation applications all in the ear where brain activity can be monitored in real-time and corresponding acoustic stimulation delivered adaptively.
q-bio.NC 2026-04-24

Stable brain-wave decay marks tinnitus across EEG setups

Resting-State EEG Biomarkers of Tinnitus Robust to Cross-Subject and Cross-Platform Variation

Koopman eigenvalue magnitude generalizes better than frequency or microstate features in two datasets.

Tinnitus is a prevalent auditory condition lacking objective biomarkers, motivating the search for reliable neural signatures. EEG, a noninvasive brain-imaging method with high temporal resolution, provides a way to investigate the neural dynamics that may be associated with tinnitus. The generalizability of EEG-based tinnitus biomarkers across different datasets remains a critical challenge. Microstate theory has allowed for the characterization of quasi-stable topographic configurations in EEG, with some studies reporting altered microstate dynamics in tinnitus patients. This work seeks to improve upon existing dynamical systems analysis and their viability in identifying a robust biomarker. Dynamical features were extracted from two resting-state EEG datasets for the binary classification of tinnitus. Here, robustness is quantified as cross-dataset generalization, which is critical for clinical translation. We employ microstate analysis by identifying topographic states, from which transition probability and state duration features are derived. We also apply Koopman operator analysis through Dynamic Mode Decomposition (DMD) to dimensionality-reduced EEG to extract features from a single window. A linear SVM is trained on each feature set and evaluated in a cross-dataset generalization paradigm. PCA-based Koopman features yield the strongest discrimination metrics across both transfer directions, outperforming microstate-derived features. A Wasserstein-distance consistency analysis further reveals that Koopman eigenvalue \emph{magnitude}, encoding oscillation stability, generalizes across datasets ($\bar{\rho} = 0.685$), whereas eigenvalue \emph{phase}, encoding oscillation frequency, does not ($\bar{\rho} = 1.583$), providing interpretable evidence that altered oscillatory decay rates, rather than frequency shifts, constitute the more robust tinnitus biomarker.
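The PCA-plus-DMD feature described above can be sketched in a few lines: project a channels-by-time window onto its leading principal components, fit the one-step linear map by least squares, and keep eigenvalue magnitudes (the stability-encoding part the abstract finds robust). This is a generic illustration, not the paper's exact pipeline.

```python
import numpy as np

def dmd_eig_magnitudes(X, n_pc=3):
    """Sketch of a Koopman/DMD biomarker feature: reduce an EEG window
    (channels x time) with PCA, fit y[t+1] ~ A y[t] by least squares,
    and return sorted eigenvalue magnitudes of A (oscillation stability)."""
    X = X - X.mean(axis=1, keepdims=True)
    # PCA via SVD of the centered data matrix
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Y = U[:, :n_pc].T @ X                       # reduced time series (n_pc x T)
    # Least-squares one-step predictor: Y[:, :-1].T @ A = Y[:, 1:].T
    A, *_ = np.linalg.lstsq(Y[:, :-1].T, Y[:, 1:].T, rcond=None)
    eig = np.linalg.eigvals(A.T)
    return np.sort(np.abs(eig))[::-1]           # magnitudes, descending

rng = np.random.default_rng(0)
mags = dmd_eig_magnitudes(rng.standard_normal((8, 200)), n_pc=3)
```

Magnitudes near 1 indicate slowly decaying oscillatory modes; the abstract's finding is that these decay rates, not the eigenvalue phases, transfer across datasets.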
q-bio.NC 2026-04-24

Bounded noise turns regular neuron firing into irregular bursts

Noise-accelerated Kramers Escape and Coherence Resonance in a 5D Neural Manifold

In a 5D cortical pacemaker model, extreme multiplicative noise accelerates escape from the slow manifold and drives high-frequency irregular bursting.

Intrinsic channel noise is fundamental to neural processing, yet its state-dependent nature, when constrained by strict Feller boundary conditions, is often overlooked. Here, we demonstrate that this bounded multiplicative noise is not merely a source of jitter but an active dynamical force that fundamentally reshapes neural excitability. Investigating a 5D Hodgkin-Huxley-type cortical pacemaker model, we utilize a full-truncation semi-implicit Euler scheme to ensure rigorous probability conservation and domain-preserving integration. Through comprehensive parameter sweeps, we uncover a rich triphasic landscape of noise-induced transitions dictated by the underlying bifurcation structure. Deep in the subthreshold regime, multiplicative noise acts as a constructive force, triggering stochastic awakening via Kramers escape. Near the subcritical Hopf bifurcation, this evolves into highly robust coherence resonance (CR). Crucially, in the supra-threshold oscillatory regime, our framework reveals a striking dynamical shift: a generalized, noise-accelerated Kramers escape. Under extreme multiplicative noise - characteristic of sparse channel populations - strictly bounded fluctuations actively amplify escape rates from the hyperpolarized slow manifold, transforming regular pacing into high-frequency, irregular bursting. Conductance perturbation experiments confirm the profound biological robustness of this transition. These findings establish a physically rigorous mechanism for how boundary-constrained noise drives high-dimensional oscillators toward states of pathological hyperexcitability.
q-bio.NC 2026-04-24

Hub-LoRA tunes foundation models for faithful brain biomarkers

Foundation models for discovering robust biomarkers of neurological disorders from dynamic functional connectivity

Standard checks miss when models overlook key brain hubs; targeted adaptation yields results matching meta-analyses on autism, ADHD and Alzheimer's disease.

Several brain foundation models (FM) have recently been proposed to predict brain disorders by modelling dynamic functional connectivity (FC). While they demonstrate remarkable model performance and zero- or few-shot generalization, the salient features identified as potential biomarkers are yet to be thoroughly evaluated. We propose RE-CONFIRM, a framework for evaluating the robustness of potential biomarker candidates elucidated by deep learning (DL) models including FMs. From experiments on five large datasets of Autism Spectrum Disorder (ASD), Attention-deficit Hyperactivity Disorder (ADHD), and Alzheimer's Disease (AD), we found that although commonly used performance metrics provide an intuitive assessment of model predictions, they are insufficient for evaluating the robustness of biomarkers identified by these models. RE-CONFIRM metrics revealed that simply finetuning FMs leads to models that fail to capture regional hubs effectively, even in disorders where hubs are known to be implicated, such as ASD and ADHD. In view of this, we propose Hub-LoRA (Low-Rank Adaptation) as a fine-tuning technique that enables FMs to not only outperform customised DL models but also produce neurobiologically faithful biomarkers supported by meta-analyses. RE-CONFIRM is generalizable and can be easily applied to ascertain the robustness of DL models trained on functional MRI datasets. Code is available at: https://github.com/SCSE-Biomedical-Computing-Group/RE-CONFIRM.
q-bio.NC 2026-04-24

Low-dispersion stimuli double vision-language model alignment

Modulating Cross-Modal Convergence with Single-Stimulus, Intra-Modal Dispersion

Images where vision models agree produce up to twice the cross-modal convergence as those where they diverge.

Neural networks exhibit a remarkable degree of representational convergence across diverse architectures, training objectives, and even data modalities. This convergence is predictive of alignment with brain representation. A recent hypothesis suggests this arises from learning the underlying structure in the environment in similar ways. However, it is unclear how individual stimuli elicit convergent representations across networks. An image can be perceived in multiple ways and expressed differently using words. Here, we introduce a methodology based on the Generalized Procrustes Algorithm to measure intra-modal representational convergence at the single-stimulus level. We applied this to vision models with distinct training objectives, selecting stimuli based on their degree of alignment (intra-modal dispersion). Crucially, we found that this intra-modal dispersion strongly modulates alignment between vision and language models (cross-modal convergence). Specifically, stimuli with low intra-modal dispersion (high agreement among vision models) elicited significantly higher cross-modal alignment than those with high dispersion, by up to a factor of two (e.g., in pairings of DINOv2 with language models). This effect was robust to stimulus selection criteria and generalized across different pairings of vision and language models. Measuring convergence at the single-stimulus level provides a path toward understanding the sources of convergence and divergence across modalities, and between neural networks and human neural representations.
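A simplified stand-in for the single-stimulus dispersion measure: orthogonally align each model's stimulus-by-feature representation (a Procrustes-style consensus, here against the first model rather than the full iterative GPA) and score each stimulus by its spread across the aligned models. Everything here is an illustrative assumption about one reasonable implementation.

```python
import numpy as np

def per_stimulus_dispersion(reps):
    """Sketch of intra-modal dispersion at the single-stimulus level:
    rotate each (stimuli x dims) representation onto a common frame,
    then measure each stimulus's spread across aligned models."""
    reps = [r - r.mean(axis=0) for r in reps]        # center each model
    ref = reps[0]
    aligned = []
    for r in reps:
        # Orthogonal Procrustes: R = U V^T for r^T ref = U S V^T
        U, _, Vt = np.linalg.svd(r.T @ ref)
        aligned.append(r @ (U @ Vt))
    A = np.stack(aligned)                            # (models, stimuli, dims)
    consensus = A.mean(axis=0)
    # Dispersion: mean distance of each stimulus to the consensus
    return np.linalg.norm(A - consensus, axis=2).mean(axis=0)

rng = np.random.default_rng(1)
base = rng.standard_normal((10, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))     # random rotation
# Two "models": the same representation, one rotated plus tiny noise
disp = per_stimulus_dispersion([base, base @ Q + 0.01 * rng.standard_normal((10, 4))])
```

Rotated copies of the same geometry align back onto each other, so dispersion stays near zero; stimuli on which models genuinely disagree would stand out, which is the quantity the abstract relates to cross-modal convergence.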
q-bio.NC 2026-04-24

Criticality signatures organize along brain hierarchy with opposite static and dynamic gradients

Hierarchical organization of critical brain dynamics

In mouse visual cortex and hippocampus, static exponents and the dynamic exponent trend in opposite directions along the anatomical gradient

The hierarchical organization of the brain is a fundamental structural principle, while brain criticality is a leading hypothesis for its collective dynamics. However, the connection between structure and signatures of criticality remains an open question. Here, we address this issue by applying phenomenological renormalization group approaches to large-scale neuronal spiking activity from the mouse visual cortex and hippocampus. We find that signatures of criticality are not uniform, but instead vary systematically along the known anatomical hierarchy in both brain systems. Strikingly, the direction along this gradient is inconsistent across different criticality exponents, revealing a nontrivial, measure-dependent organization: exponents based on static properties point to a gradient in one direction, while the exponent based on dynamic properties points in the opposite direction. Moreover, the signatures across the visual system are strongly modulated by the engagement in a visual task. We show that the correlations among criticality markers of different brain regions during active engagement are sufficient to reconstruct the anatomical hierarchy from the dynamics. Scaling exponents closely follow a theoretically predicted scaling relation among them, and covary with the hierarchical position. Our findings provide a direct link between the collective dynamics of neurons and the macroscopic architecture of the brain.
q-bio.NC 2026-04-24

Brain cross-region patterns stable but missed by AI models

Only Brains Align with Brains: Cross-Region Alignment Patterns Expose Limits of Normative Models

Models that predict responses in individual areas still fail to capture how regions relate to each other across subjects.

Neuroscientists and computer vision researchers use model-brain alignment benchmarks to compare artificial and biological vision systems. These benchmarks rank models according to alignment measures such as the similarity of representational geometry or the predictability of neural responses from model activations. However, recent works have identified a number of problems with these rankings, among them their lack of discriminative power and robustness, raising the conceptual question of what it means for a model to be brain-aligned. Here we introduce alignment patterns -- characteristic functional relationship profiles of each brain region to all others -- and propose that models should reproduce these patterns to qualify as brain-aligned. First, we apply a standard benchmarking pipeline to a broad spectrum of vision models of the BOLD Moments video fMRI dataset across visual regions of interest (ROIs). We find diverse models appear equivalent in their brain alignment, reflecting the lack of discriminative power of conventional alignment benchmarking pipelines. In contrast, alignment pattern analysis (APA) is a second-order structural consistency test: a model aligned to a given ROI should reproduce that ROI's characteristic cross-region alignment profile. Applying APA, we find that, while these patterns are highly stable across brains of different subjects, even top-ranked models often fail to capture them. Finally, we argue for a clearer distinction between the criteria a model must meet to serve as a tool versus as a computational model for human visual cortex. Conventional alignment measures may be sufficient for identifying neurally predictive models, but claims about computational or algorithmic similarity may require a stronger basis of evidence, including the reproducibility of relational alignment patterns.
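One way to picture an "alignment pattern" is as a profile of one region's similarity to every other region; below is a minimal RSA-style sketch (correlating representational dissimilarity matrices), which is an illustrative stand-in for the paper's APA pipeline rather than its actual alignment measure.

```python
import numpy as np

def alignment_pattern(rois, target):
    """Sketch of a cross-region alignment pattern: the target ROI's
    representational-geometry correlation with every ROI, including
    itself. rois: list of (conditions x voxels) response arrays."""
    def rdm(X):
        # 1 - Pearson correlation between condition response patterns
        C = np.corrcoef(X)
        return 1.0 - C[np.triu_indices_from(C, k=1)]
    t = rdm(rois[target])
    return np.array([np.corrcoef(t, rdm(X))[0, 1] for X in rois])

rng = np.random.default_rng(3)
rois = [rng.standard_normal((6, 5)) for _ in range(4)]
pattern = alignment_pattern(rois, target=1)
```

Under the abstract's proposal, a model aligned to ROI 1 should reproduce `pattern` (ROI 1's characteristic profile across regions), not merely predict ROI 1's responses in isolation.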
q-bio.NC 2026-04-23

LPC networks reach near-minimal response times with modular designs

Response time of lateral predictive coding and benefits of modular structures

Tuning recurrent interactions cuts response latency to the theoretical floor while error, robustness, and accuracy stay the same as in densely connected networks.

Lateral predictive coding (LPC) is a simple theoretical framework to appreciate feature detection in biological neural circuits. Recent theoretical work [Huang et al., Phys.Rev.E 112, 034304 (2025)] has successfully constructed optimal LPC networks capable of extracting non-Gaussian hidden input features by imposing the tradeoff between energetic cost and information robustness, but the resulting dynamical systems of recurrent interactions can be very slow in responding to external inputs. We investigate response-time reduction in the present paper. We find that the characteristic response time of the LPC system can be minimized to closely approaching the lower-bound value without compromising the mean predictive error (energetic cost) and the information robustness of signal transmission. We further demonstrate that optimal LPC networks taking a modular structural organization with extensively reduced number of lateral interactions are equally excellent as all-to-all completely connected networks, in terms of feature detection performance, response time, energetic cost and information robustness.
q-bio.NC 2026-04-23

Organic materials enable field-free quantum computing

The γ_c-Peak: Covariant Recovery on Four Organic Qubit Platforms

Four SVILC paths meet qubit conditions and deliver measurable quantum advantage in algorithm benchmarks

The Petz recovery map (1986) provably reverses a noisy quantum channel on a reference state, but its algorithmic relevance to real, dissipation-dominated platforms has remained unclear. Using the open-source \texttt{organic-qc-bench} simulation package, we benchmark a Petz-style covariant-purification quantum error correction (CQEC) protocol across four engineered organic qubit platforms operated \emph{without any magnetic field}: a flavin-nitroxide radical-pair reservoir (P1); perchlorotriphenylmethyl radicals in a covalent organic framework (P2); the SVILC qubit [Wakaura2017] on $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br (P3, conditional on SVILC confirmation); and a Su-Schrieffer-Heeger soliton on \emph{trans}-polyacetylene (P4). Across five quantum algorithms (QKAN, qDRIFT, control-free QPE, Shor-Regev, Bernstein-Vazirani) and two ML tasks, CQEC gains are significant ($p\!<\!10^{-5}$; Wilcoxon, Bonferroni $\alpha\!=\!0.05/44$) for all sixteen path$\times$algorithm pairs. The central finding is the \emph{$\gamma_c$-peak}: the fidelity gain $\Delta F$ is maximised \emph{at} the entanglement-breaking threshold $\gamma_c$, with $\Delta F_{\rm max}\!=\!+0.303$ at $d\!=\!64$ and a linear $\log_2 d$ scaling over $d=2$-$64$ -- algorithmically confirming the prediction [Wakaura2026LQBH] that Petz recovery preserves coherence beyond this threshold. Bernstein-Vazirani also yields a $7.6$-$31\times$ provable quantum advantage at $n\!=\!3$-$5$, diarylethene-photoswitch CZ fidelities reach $F_{CZ}\!\ge\!0.987$ for P2-P4, and projected manufacturing costs are 10-40$\times$ lower with 10-200$\times$ less operating power than superconducting platforms. The $\gamma_c$-peak establishes Petz-style recovery as a practically relevant primitive at the dissipation-coherence boundary and identifies PTM-COF (P2) as the highest-priority experimental target.
q-bio.NC 2026-04-23

Decorrelation reduces brain-to-text WER from 26.3% to 21.6%

MoDAl: Self-Supervised Neural Modality Discovery via Decorrelation for Speech Neuroprosthesis

Preventing multiple neural encoders from duplicating each other allows unique syntactic information from area 44 to improve speech decoding.

Speech neuroprosthesis systems decode intended speech from neural activity in the absence of audible output, offering a path to restoring communication for individuals with speech-impairing conditions. Current approaches decode predominantly from motor cortical areas, discarding others -- such as area 44, part of Broca's area -- that may encode complementary linguistic information. We introduce MoDAl (Modality Decorrelation and Alignment), a framework that discovers complementary neural modalities through the interplay of two objectives in a shared projection space. A contrastive loss aligns each of several parallel brain encoders with the text embeddings of a pretrained large language model (LLM), while a decorrelation loss prevents the encoders from coalescing to duplicative representations. We prove that these objectives are in productive tension: Contrastive alignment induces transitive modality coalescence, which decorrelation must counteract for the framework to discover diverse neurolinguistic modalities. On the Brain-to-Text Benchmark '24, MoDAl reduces word error rate (WER) from 26.3% to 21.6% compared to the previous best end-to-end method, with the gain from incorporating previously discarded area 44 signals arising entirely from the decorrelation mechanism. Analysis of the discovered modalities reveals functional specialization: Encoders receiving area 44 input capture structural and syntactic properties (sentence length, grammatical voice, wh-words), consistent with the neurolinguistic understanding of Broca's area.
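The decorrelation objective can be sketched as a penalty on the cross-correlation between two encoders' batch embeddings, so they cannot coalesce into duplicate representations. This numpy stand-in (Barlow-Twins-style off-diagonal penalty applied to the full matrix) is an illustrative assumption; the paper trains it jointly with the contrastive alignment loss to LLM text embeddings.

```python
import numpy as np

def decorrelation_loss(Za, Zb):
    """Sketch of a decorrelation objective between two encoders'
    batch embeddings (batch x dim): standardize per dimension,
    form the cross-correlation matrix, penalize its squared entries.
    Near 0 when the encoders carry independent information."""
    Za = (Za - Za.mean(0)) / (Za.std(0) + 1e-8)
    Zb = (Zb - Zb.mean(0)) / (Zb.std(0) + 1e-8)
    C = Za.T @ Zb / len(Za)              # cross-correlation matrix
    return float((C ** 2).mean())

rng = np.random.default_rng(2)
Z = rng.standard_normal((256, 8))
same = decorrelation_loss(Z, Z)                        # duplicated encoders
indep = decorrelation_loss(Z, rng.standard_normal((256, 8)))  # distinct ones
```

Duplicated encoders incur a large penalty while independent ones do not, which is the pressure the abstract credits for keeping the area-44 encoder from merely copying motor-cortex representations.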
q-bio.NC 2026-04-22

Raw EEG exposes brain-body resonance sustaining critical dynamics for consciousness

Self-organized criticality enables conscious integration through brain-body resonance

Standard cleaning erases power-law avalanches and integrative coupling; unfiltered signals show 78 ms bidirectional resonance and spatial holographic encoding.

The "binding problem" of how distributed neural activity unifies into conscious experience has remained an open challenge since its articulation in 1890. We present evidence that conscious integration relies on self-organized criticality maintained by brain-body resonance, placing human cognition within the universality class of critical systems. Using 64-channel EEG data, we demonstrate that conventional preprocessing inadvertently eliminates the very integrative dynamics it seeks to measure. Removing physiological signals conventionally treated as "artifacts" drastically reduces the shared variance between global phase synchronization and stimulus-evoked amplitude, an effect highly specific to physiological components. We trace this to a fundamental brain-body resonance at 78 milliseconds that establishes zero-lag synchronization driven by robust bidirectional causality. Crucially, raw data exhibits heavy-tailed avalanche dynamics indicative of a near-critical regime, whereas conventionally cleaned data definitively rejects power-law distributions, signaling an artificial shift to subcriticality. Finally, we show these critical dynamics enable holographic information encoding, evidenced by a significant emergence of spatial interference patterns post-resonance. Together, these findings indicate that physiological signals actively and selectively support the coupling between large-scale neural coordination and event-related processing.
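The avalanche analysis the abstract relies on can be sketched generically: bin network activity, treat each contiguous run of supra-threshold bins as one avalanche, and take its summed activity as the size; criticality is then assessed by whether the sizes follow a power law. This is a textbook-style illustration, not the paper's exact procedure.

```python
import numpy as np

def avalanche_sizes(activity, thresh=0.0):
    """Sketch of neuronal-avalanche extraction: each contiguous run of
    supra-threshold bins is one avalanche; its size is the summed
    activity over the run. Heavy-tailed (power-law) size distributions
    are a standard signature of near-critical dynamics."""
    active = activity > thresh
    sizes, current = [], 0.0
    for is_active, x in zip(active, activity):
        if is_active:
            current += x
        elif current > 0:           # run just ended: record the avalanche
            sizes.append(current)
            current = 0.0
    if current > 0:                 # run extending to the window's end
        sizes.append(current)
    return np.array(sizes)

sizes = avalanche_sizes(np.array([0, 2, 3, 0, 1, 0, 0, 5, 1, 2, 0]))
```

On this toy trace the three runs yield sizes 5, 1, and 8; the abstract's claim is that on raw EEG the resulting size distribution is heavy-tailed, while conventional cleaning shifts it toward subcritical statistics.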
q-bio.NC 2026-04-22

Dynamical Bayesian model reproduces order biases in touch perception

Modelling time-order effects in haptic perception with a Bayesian dynamical framework

An evolving internal intensity representation explains the direction and size of time-order effects with few parameters per subject.

Perceptual judgments of sequential stimuli are systematically biased by prior expectations and by the temporal structure of sensory input. In haptic discrimination tasks, these effects often manifest as time-order asymmetries, whereby the perceived difference between two stimuli depends on their presentation order. Here, we introduce a dynamical Bayesian model that accounts for these biases by combining noisy sensory measurements with an evolving internal representation of stimulus intensity. The model formalizes perception as an inference process in which prior expectations are updated by incoming stimuli and propagate in time between observations. We test the model on psychophysical data from vibrotactile discrimination experiments, in which participants compare pairs of sequential stimuli with varying intensities. With a small number of parameters, the model quantitatively reproduces both the direction and magnitude of time-order effects across subjects, as well as the observed inter-individual variability. The inferred parameters provide a compact description of perceptual biases in terms of prior expectations and noise characteristics. Beyond fitting the data, the model induces a transformation of stimulus space, leading to a subject-dependent geometry of perceived stimuli. In this transformed space, perceptual judgments exhibit approximate symmetries that are absent in the physical stimulus coordinates. These results suggest that temporal biases in perception can be understood as a consequence of dynamical inference, and that they impose non-trivial geometric constraints on perceptual representations.
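A minimal sketch of how such a dynamical Bayesian model produces a time-order effect: each percept is a precision-weighted blend of the noisy measurement and the current prior, and between stimuli the prior recenters on the first percept while its uncertainty grows. Parameter names and values below are illustrative assumptions, not the paper's fitted parameters.

```python
def perceived_difference(s1, s2, prior_mu=10.0, prior_var=4.0,
                         sens_var=1.0, drift=2.0):
    """Sketch of a dynamical Bayesian account of time-order effects:
    Kalman-style blending of prior and measurement, with the prior
    propagating (and its variance growing by `drift`) between stimuli."""
    def blend(mu, var, s):
        k = var / (var + sens_var)        # precision-weighted gain
        return mu + k * (s - mu), (1 - k) * var
    p1, v1 = blend(prior_mu, prior_var, s1)   # percept of first stimulus
    mu2, var2 = p1, v1 + drift                # prior drifts to first percept
    p2, _ = blend(mu2, var2, s2)              # percept of second stimulus
    return p2 - p1

d_ab = perceived_difference(11.0, 13.0)   # weaker stimulus first
d_ba = perceived_difference(13.0, 11.0)   # stronger stimulus first
```

Both orders contract the perceived difference below the physical difference of 2, but by different amounts, so the judged difference depends on presentation order, which is exactly the asymmetry the abstract models.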
q-bio.NC 2026-04-21

Mouse brain models scale with data but plateau with size

OmniMouse: Scaling properties of multi-modal, multi-task Brain Models on 150B Neural Tokens

150 billion neural tokens reveal a data-limited regime unlike language and vision AI.

Figure from the paper
abstract
Scaling data and artificial neural networks has transformed AI, driving breakthroughs in language and vision. Whether similar principles apply to modeling brain activity remains unclear. Here we leveraged a dataset of 3.1 million neurons from the visual cortex of 73 mice across 323 sessions, totaling more than 150 billion neural tokens recorded during natural movies, images and parametric stimuli, and behavior. We train multi-modal, multi-task models that support three regimes flexibly at test time: neural prediction, behavioral decoding, neural forecasting, or any combination of the three. OmniMouse achieves state-of-the-art performance, outperforming specialized baselines across nearly all evaluation regimes. We find that performance scales reliably with more data, but gains from increasing model size saturate. This inverts the standard AI scaling story: in language and computer vision, massive datasets make parameter scaling the primary driver of progress, whereas in brain modeling -- even in the mouse visual cortex, a relatively simple system -- models remain data-limited despite vast recordings. The observation of systematic scaling raises the possibility of phase transitions in neural modeling, where larger and richer datasets might unlock qualitatively new capabilities, paralleling the emergent properties seen in large language models. Code available at https://github.com/enigma-brain/omnimouse.
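The contrast between data scaling and parameter scaling can be illustrated with a log-log fit. The numbers below are synthetic stand-ins, not the paper's measurements: one error curve follows a clean power law in data size, while the other flattens against an irreducible floor as parameter count grows, which is what a saturating model-size trend looks like.

```python
import numpy as np

def scaling_exponent(x, y):
    """Slope of a log-log fit, i.e. b in y ≈ c * x**b."""
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return slope

n_tokens = np.array([1e8, 1e9, 1e10, 1e11])
err_data = 0.5 * n_tokens ** -0.10                   # clean power law in data size
n_params = np.array([1e6, 1e7, 1e8, 1e9])
err_params = 0.12 + 0.5 * n_params ** -0.30          # gains flatten against a floor

exp_data = scaling_exponent(n_tokens, err_data)      # recovers -0.10
exp_params = scaling_exponent(n_params, err_params)  # much shallower apparent slope
```

The floor term makes the fitted exponent for parameters far shallower than the data exponent even though the underlying per-parameter power law is steeper, mirroring a data-limited regime.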
q-bio.NC 2026-04-21

Brain and AI representations align from shared constraints

The Umwelt Representation Hypothesis: Rethinking Universality

The Umwelt Representation Hypothesis attributes similarities to overlapping ecological pressures during development rather than convergence.

abstract
Recent studies reveal striking representational alignment between artificial neural networks (ANNs) and biological brains, leading to proposals that all sufficiently capable systems converge on universal representations of reality. Here, we argue that this claim of Universality is premature. We introduce the Umwelt Representation Hypothesis (URH), proposing that alignment arises not from convergence toward a single global optimum, but from overlap in ecological constraints under which systems develop. We review empirical evidence showing that representational differences between species, individuals, and ANNs are systematic and adaptive, which is difficult to reconcile with Universality. Finally, we reframe ANN model comparison as a method for mapping clusters of alignment in ecological constraint space rather than searching for a single optimal world model.
q-bio.NC 2026-04-20

Open quantum dynamics enable escape from Nash traps in decisions

Quantum-Like Models of Cognition and Decision Making: Open-Systems and Gorini--Kossakowski--Sudarshan--Lindblad Dynamics

Non-commuting Hamiltonians and cognitive beats model how minds avoid classical equilibria and time internal conflicts.

Figure from the paper
abstract
This paper starts with surveying the evolution of quantum-like models of cognition and decision making, transitioning from static kinematic representations to a robust dynamical framework based on open quantum systems. We provide a comprehensive analysis of the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation's application in cognitive psychology and decision making, illustrating how it models mental state evolution as a dissipative process influenced by an informational environment. We categorize dynamical regimes into Passive and Active Hamiltonians, demonstrating how non-commutation with projections on decision basis serves as a mathematical signature of cognitive agency and Quantum Escape from classical equilibria. The utility of this framework is further explored through its ability to stabilize non-Nash outcomes in strategic games, such as the Prisoner's Dilemma. Building upon this dynamical foundation, we identify ``cognitive beats'' as a signature of the internal struggle between competing ``flows of mind'' deliberated at approximately equal frequencies. Distinct from the damped oscillations of simple interference, these beats emerge from a structural tension between Liouvillian channels that generates a secondary, slow-scale modulation of conviction. This beat envelope dictates the timing of peak readiness and hesitation, providing a mathematical map of the transition between conflicting cognitive states. By resolving these nested time scales, we provide a new spectral diagnostic for the depth of cognitive agency and the complexity of the underlying deliberation process. This paper develops a theoretical framework linking GKSL dynamics with quantum-like cognition and decision-making (QCDM), highlighting how dissipative quantum models can capture features of human thought and decision processes.
q-bio.NC 2026-04-20

Poisson flow organizes cortical folds from curvature gradients

Poisson Flow Model of Cortical Folding Pattern

Smooth scalar field derived from mean curvature gradients enables coherent sulcal-gyral analysis in epilepsy

abstract
Cortical folding reflects coordinated neurodevelopmental processes and provides a sensitive marker of neurological disease. In juvenile myoclonic epilepsy (JME), structural abnormalities are subtle and spatially distributed, limiting the sensitivity of conventional morphometric measures such as cortical thickness. We introduce a Poisson flow model derived from gradients of the mean curvature field on the cortical surface. The method yields a smooth scalar field obtained from a Poisson equation, whose surface gradient defines a flow representation of folding organization. This representation enables spatially coherent characterization of sulcal--gyral patterns and provides a principled geometric framework for studying distributed cortical alterations in JME.
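On a discrete surface the construction reduces to a graph Poisson solve. The sketch below is a schematic analogue on an arbitrary graph, not the authors' mesh implementation: edge differences of the curvature play the role of the gradient field, their divergence is the Poisson source, and the pseudoinverse of the graph Laplacian yields the smooth scalar field up to a constant.

```python
import numpy as np

def poisson_flow(edges, curvature):
    """Graph analogue of the Poisson flow: solve L u = div(grad H),
    where H is the (mean-curvature) field on the nodes."""
    n = len(curvature)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    grad = np.array([curvature[j] - curvature[i] for i, j in edges])
    div = np.zeros(n)
    for (i, j), g in zip(edges, grad):
        div[i] -= g; div[j] += g
    u = np.linalg.pinv(L) @ div      # minimum-norm solution, defined up to a constant
    flow = np.array([u[j] - u[i] for i, j in edges])
    return u, flow

# Path graph with a quadratic-like curvature profile.
u, flow = poisson_flow([(0, 1), (1, 2), (2, 3)], [0.0, 1.0, 4.0, 9.0])
```

When the edge field is exactly a gradient (as here), the recovered flow matches it; on a real cortical mesh the solve additionally smooths noisy curvature gradients into a coherent field.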
q-bio.NC 2026-04-20

Neuroscience principles address AI's three core capability gaps

NeuroAI and Beyond: Bridging Between Advances in Neuroscience and Artificial Intelligence

Workshop roadmap links brain mechanisms to fixes for physical interaction, brittle learning, and energy costs while calling for cross-field training.

abstract
Neuroscience and Artificial Intelligence (AI) have made impressive progress in recent years but remain only loosely interconnected. Based on a workshop convened by the National Science Foundation in August 2025, we identify three fundamental capability gaps in current AI: the inability to interact with the physical world, inadequate learning that produces brittle systems, and unsustainable energy and data inefficiency. We describe the neuroscience principles that address each: co-design of body and controller, prediction through interaction, multi-scale learning with neuromodulatory control, hierarchical distributed architectures, and sparse event-driven computation. We present a research roadmap organized around these principles at near, mid, and long-term horizons. We argue that realizing this program requires a new generation of researchers trained across the boundary between neuroscience and engineering, and describe the institutional conditions (interdisciplinary training, hardware access, community standards, and ethics) needed to support them. We conclude that NeuroAI, neuroscience-informed artificial intelligence, has the potential to overcome limitations of current AI while deepening our understanding of biological neural computation.
q-bio.NC 2026-04-20

Energy flow model detects cyclic causality in brain networks

Causality as a Minimum Energy Principle

Hodge decomposition isolates stable loops in fMRI data that acyclic models like Granger causality miss.

abstract
Classical causal models, such as Granger causality and structural equation modeling, are largely restricted to acyclic interactions and struggle to represent cyclic and higher-order dynamics in complex networks. We introduce a causal framework grounded in a variational principle, interpreting causality as directional energy flow from high- to low-energy states along network connections. Using Hodge theory, network flows are decomposed into dissipative components and a persistent harmonic component that captures stable cyclic interactions. Applied to resting-state fMRI connectivity, our variational framework reveals robust cyclic causal patterns that are not detected by conventional causal models, highlighting the value of variational principles for causality.
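The decomposition can be sketched on a plain graph. This is a schematic only: a full Hodge decomposition also separates curl components using triangle (higher-order) structure, omitted here, so the least-squares split below yields just a gradient (acyclic) part and a divergence-free cyclic remainder; the example flows are invented.

```python
import numpy as np

def hodge_split(edges, n, f):
    """Split edge flows f into a gradient (acyclic) part and a
    divergence-free cyclic remainder via least squares on the
    node-edge incidence matrix."""
    D = np.zeros((len(edges), n))
    for e, (i, j) in enumerate(edges):
        D[e, i], D[e, j] = -1.0, 1.0
    u, *_ = np.linalg.lstsq(D, f, rcond=None)   # best-fitting node potential
    f_grad = D @ u
    return f_grad, f - f_grad

edges = [(0, 1), (1, 2), (2, 0)]
# Pure circulation around the 3-cycle: no gradient part survives.
f_grad, f_cyc = hodge_split(edges, 3, np.array([1.0, 1.0, 1.0]))
# A gradient flow (potential 0, 1, 2): no cyclic part survives.
g2, c2 = hodge_split(edges, 3, np.array([1.0, 1.0, -2.0]))
```

The cyclic remainder is exactly the kind of stable loop that acyclic frameworks like Granger causality project away.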
q-bio.NC 2026-04-20

Chloride influx fraction sets seizure stage transitions

Role of chloride concentration in modulating seizure transitions in excitatory and inhibitory networks

Varying the portion of inhibitory conductance driving chloride entry organizes network activity into pre-ictal, tonic, and clonic phases.

Figure from the paper
abstract
Experimental evidence indicates that intracellular chloride concentration regulates the excitation and inhibition (EI) balance, yet the mechanisms by which activity-dependent chloride dynamics drive seizure evolution and stage transitions remain unclear. We present a conductance-based neuronal network in which EI balance emerges from chloride homeostasis via channel-mediated influx and transporter-mediated extrusion. We show that the fraction of inhibitory synaptic conductance contributing to channel-mediated influx acts as a control parameter that organizes seizure dynamics into distinct stages (pre-ictal, ictal-tonic, and ictal-clonic), distinguished by characteristic amplitude and frequency signatures. Decreasing this fraction shortens ictal activity and suppresses seizure initiation, whereas a high fraction promotes the emergence of ictal-tonic and ictal-clonic stages and spiral-wave dynamics, rendering seizure dynamics largely insensitive to inhibition. At intermediate values, seizures bypass the ictal-tonic stage and emerge directly as the ictal-clonic stage. Moreover, joint variation of fractions with synaptic strengths reveals that recurrent excitation expands the tonic-clonic seizure, while recurrent inhibition prolongs pre-ictal states and suppresses ictal-clonic activity.
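The control-parameter idea can be sketched with a single-compartment chloride balance. All constants below are illustrative assumptions, not the paper's fitted values: a fraction `frac` of inhibitory conductance carries Cl- inward whenever the membrane sits above the chloride Nernst potential, while a KCC2-like extrusion term pulls intracellular chloride back to baseline. Raising `frac` loads chloride and depolarizes E_Cl at steady state, which is how inhibition erodes.

```python
import numpy as np

def steady_chloride(frac, g_inh=1.0, v=-55.0, cl0=6.0, cl_out=130.0,
                    k_in=0.02, k_out=0.05, dt=0.1, steps=20000):
    """Relax intracellular [Cl-] under channel influx, scaled by `frac`
    (the fraction of inhibitory conductance carrying Cl-), against a
    KCC2-like extrusion pulling back to the resting level cl0."""
    cl = cl0
    for _ in range(steps):
        e_cl = -26.7 * np.log(cl_out / cl)        # Nernst potential (mV), RT/F ≈ 26.7 mV
        influx = k_in * frac * g_inh * max(0.0, v - e_cl)
        efflux = k_out * (cl - cl0)
        cl += dt * (influx - efflux)
    return cl, -26.7 * np.log(cl_out / cl)

cl_lo, e_lo = steady_chloride(0.2)   # small influx fraction: E_Cl stays hyperpolarized
cl_hi, e_hi = steady_chloride(0.8)   # large fraction: Cl- loads, E_Cl depolarizes
```

In the network model this depolarizing shift is what renders the dynamics "largely insensitive to inhibition" at high influx fractions.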
q-bio.NC 2026-04-17

Goxpyriment is a new open-source framework written in the Go language for creating and…

Goxpyriment: A Go Framework for Behavioral and Cognitive Experiments

A Go programming library produces self-contained experiment binaries with built-in stimuli, audio, and OS-level timing for psychology…

abstract
We introduce `Goxpyriment', a new open-source software framework for programming behavioral and cognitive experiments using the Go programming language. The library is designed to address some limitations of existing Python-based experiment tools, particularly the runtime environment complexity that frequently complicates deployment across laboratories. Because Go is a compiled language that can natively embed assets (e.g., graphics, audio files, and stimulus lists), Goxpyriment compiles entire experiments into single, self-contained executable binaries with zero runtime dependencies. This drastically simplifies distribution to collaborators and testing computers. The programming interface, inspired by Expyriment (Krause & Lindemann, 2014), was designed to be human friendly. The library includes an array of visual stimuli (text, shapes, images, Gabor patches, motion clouds, ...) and audio capabilities (WAV playback and tone generation). While developing Goxpyriment, we focused on timing reliability. Input events are timestamped by the operating system at hardware-interrupt time, so reaction times are computed by subtracting two OS-level timestamps rather than relying on continuous polling. Go's garbage collector can be disabled, greatly reducing the probability of unpredictable pauses that could corrupt stimulus timing. Finally, a set of over forty psychology experiments implemented in Goxpyriment are provided that promote not only learning by humans but also improve the ability of modern AI-assisted coding tools to help program experiments. The framework is released under the GNU General Public License v3 and is freely available at https://github.com/chrplr/goxpyriment.
q-bio.NC 2026-04-17

Ground-truth approximation yields 250-1000% better encoding evaluations

Robust Evaluation of Neural Encoding Models via ground-truth approximation

Canonical correlation analysis and participant averaging align MEEG signals with predictions for a more sensitive evaluation metric.

abstract
Encoding models enable measurement of how our brains represent sensory inputs using electro- and magneto-encephalography (MEEG). Evaluating how closely encoding models reflect the underlying brain functions is a crucial premise for model interpretation and hypothesis testing. However, the ground-truth neural activity is unknown, preventing model evaluation with respect to the target neural signal. Existing evaluation metrics must therefore relate a model's predictions to noisy MEEG measurements, where most variance is stimulus-unrelated. Here, I introduce an evaluation framework where model predictions are compared to a ground-truth approximation, obtained by aligning MEEG signals with predictions using canonical correlation analysis and via participant averaging. The resulting metric (CPA-PA) yields single-participant evaluations outperforming conventional scores by 300-1000% on synthetic EEG data and 250% on 34 real MEEG datasets (818 datapoints). These gains reflect increased sensitivity to stimulus-relevant neural activity and reduced dependence on SNR, establishing ground-truth approximation as a robust framework for evaluating encoding models.
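The alignment step can be sketched in a few lines. With a one-dimensional model prediction, CCA reduces to a least-squares projection of the sensors onto the direction most correlated with the prediction; combined with participant averaging this gives a rough schematic of the CPA-PA idea, with all sizes, topographies, and noise levels invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, C, P = 500, 32, 8                           # time points, channels, participants
s = np.sin(np.linspace(0.0, 8.0 * np.pi, T))   # stimulus-driven signal
pred = s.copy()                                # a (here perfect) model prediction

def aligned_score(pred, meeg_all):
    """Project each participant's channels onto the direction most
    correlated with the prediction (1-D CCA reduces to least squares),
    then average projections across participants."""
    proj = []
    for X in meeg_all:
        a, *_ = np.linalg.lstsq(X, pred, rcond=None)
        proj.append(X @ a)
    gt_approx = np.mean(proj, axis=0)          # participant-averaged ground-truth proxy
    return np.corrcoef(pred, gt_approx)[0, 1]

w = rng.normal(size=C)                         # mixed-sign sensor topography
meeg = [np.outer(s, w) + 3.0 * rng.normal(size=(T, C)) for _ in range(P)]

raw_score = np.corrcoef(pred, np.mean([X.mean(axis=1) for X in meeg], axis=0))[0, 1]
cpa_score = aligned_score(pred, meeg)          # alignment + averaging scores higher
```

Because the sensor topography mixes signs, a naive channel average cancels much of the signal, while the aligned, averaged proxy tracks the stimulus-relevant component.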
q-bio.NC 2026-04-16

Neurons signal via redox electronics

Neuronal electricality founded in murburn-thermodynamic principles: 2. Comparisons, evidenced explanations, and predictions

Murburn model predicts conduction velocity and waveforms from oxygen, redox balance, and transport rates, extending to cardiac tissue and photoreceptors.

abstract
The analyses presented herein demonstrate that neuronal electrical activity can be consistently interpreted as a manifestation of murburn redox-mediated electronic dynamics rather than as a process fundamentally driven by transmembrane ionic flux. By integrating comparison with established models, quantitative predictions, and diverse experimental observations, the murburn framework emerges as a unified and chemically grounded description of excitability. A key strength of the model lies in its predictive structure. Unlike phenomenological frameworks that rely on parameter fitting, the murburn formulation links measurable electrophysiological outputs: such as conduction velocity, waveform morphology, and threshold behavior; to physically interpretable variables including redox kinetics, transport efficiency, and environmental conditions. This enables direct experimental validation through perturbations in oxygen availability, redox balance, solvent properties, ionic strength, and external fields. Importantly, the framework extends beyond neurons to a broader class of excitable systems, including cardiac tissue, photoreceptors, and artificial redox-active materials, suggesting that excitability is a general physicochemical phenomenon rooted in reaction-transport dynamics. While the present work establishes the mid-scale dynamics of neuronal electricality, further developments are required to connect quantum-level electron transfer processes with macroscopic electrophysiological signals such as EEG and EMG. These extensions, along with targeted experimental tests, will determine the ultimate scope and applicability of the murburn paradigm.
q-bio.NC 2026-04-16

Redox dynamics derive neuronal signals without ion pumps

Neuronal electricality founded in murburn-thermodynamic principles: 1. Background and basic theoretical formulation

A unified equation from local relaxation and thermodynamic transport captures resting state, spikes, and propagation.

abstract
Trans-membrane gradients and fluxes of cations (H+, Na+, K+, etc.) were deemed to be the rationale of electrical activities of aerobic cells/organelles, as per classical perceptions. Murburn concept (an umbrella of theorization based on stochastic redox processes) has afforded novel models for various metabolic, bioenergetic and electrophysiological outcomes. Herein, the foundational mechanistic formalisms for the electrical activities of neurons that lead to signal relay along the axonal length are provided. Electron Holding Potential (EHP), a dimensionless field/state variable (related logarithmically to electron chemical potential), is used to explain neuronal activity. By combining local redox relaxation dynamics with spatial transport driven by thermodynamic gradients, we derive a unified reaction-transport-relaxation equation that captures resting potential, excitability, waveform generation, and signal propagation within a single framework. Nonlinear local redox kinetics naturally give rise to threshold behavior, all-or-none responses, and stable spike waveforms. The framework accommodates known physiological variability and provides a direct bridge between metabolic/redox state and electrophysiological behavior. This work establishes a chemically grounded, non-circular alternative to ion-centric models and offers testable predictions for neuronal dynamics across biological systems. In the second part of this work, we compare the new theory with existing systems, provide further evidence and simulations, and describe elaborate agendas for falsification and validation.
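The qualitative claims (threshold, all-or-none responses, propagation) are generic to reaction-transport systems and can be sketched with a bistable cable. Note this is a standard reaction-diffusion toy standing in for the paper's EHP equation, with an invented threshold of 0.3 and arbitrary units throughout.

```python
import numpy as np

def propagate(stim, n=200, steps=4000, dt=0.05, dx=1.0, D=1.0):
    """Bistable reaction-transport cable: local kinetics u(1-u)(u-0.3)
    plus diffusion; a stimulus at one end either dies out or launches
    an all-or-none travelling front."""
    u = np.zeros(n)
    u[:5] = stim                          # stimulate one end
    for _ in range(steps):
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
        lap[0] = u[1] - u[0]              # no-flux boundaries
        lap[-1] = u[-2] - u[-1]
        u = u + dt * (u * (1.0 - u) * (u - 0.3) + D * lap / dx**2)
    return u

sub = propagate(0.2)     # below the 0.3 threshold: activity decays away
supra = propagate(1.0)   # above threshold: a front propagates down the cable
```

The front speed scales with the square root of the transport coefficient, which is the kind of measurable relation the murburn framework proposes to test against conduction-velocity data.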
q-bio.NC 2026-04-16

Heterogeneous delays enable perfect recall in spiking neural networks

Working Memory in a Recurrent Spiking Neural Network With Heterogeneous Synaptic Delays

Recurrent SNNs chain overlapping motifs of fixed length to store arbitrary spike sequences over 1000 steps.

Figure from the paper
abstract
Working memory -- the ability to store and recall precise temporal patterns of neural activity -- remains an open challenge for spiking neural networks (SNNs). We propose a recurrent SNN of $N$ neurons in which each synapse is equipped with $D = 41$ delays, modelled as a weight tensor $\mathbf{W} \in \mathbb{R}^{N \times N \times D}$ and trained end-to-end with surrogate-gradient backpropagation through time. The network stores $M$ arbitrary target spike patterns by representing each as a sequential chain of overlapping Spiking Motifs: contiguous windows of length $D$ that uniquely predict spikes at the next time step. On a synthetic benchmark of $M=16$ patterns ($N=512$ neurons, $T=1000$ steps), training achieves a mean F1 score of $1.0$, with recall emerging first near the clamped initialisation window and propagating forward in time. This result demonstrates that heterogeneous delays provide an efficient substrate for working memory in SNNs, enabling energy-efficient neuromorphic edge deployment.
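The delay-tensor update can be sketched in a few lines. This is a schematic of the forward pass only, with an assumed indexing convention and a hard threshold in place of the paper's neuron model; the surrogate-gradient training loop is omitted.

```python
import numpy as np

def snn_step(W, history, threshold=1.0):
    """One update of a recurrent SNN with per-synapse delays.

    W[i, j, d] weights the spike that neuron j emitted d steps ago;
    `history` holds the last D spike vectors, oldest column first."""
    drive = np.einsum('ijd,jd->i', W, history[:, ::-1])
    spikes = (drive >= threshold).astype(float)
    history = np.roll(history, -1, axis=1)
    history[:, -1] = spikes
    return spikes, history

# Toy motif: neuron 1 fires iff neuron 0 fired exactly 2 steps ago.
W = np.zeros((2, 2, 3))                   # N = 2 neurons, D = 3 delays
W[1, 0, 2] = 1.0
hist = np.zeros((2, 3))
hist[0, 0] = 1.0                          # neuron 0 spiked 2 steps ago
spikes, hist = snn_step(W, hist)          # the delayed spike arrives: [0., 1.]
s2, hist = snn_step(W, hist)              # nothing left in range: [0., 0.]
```

Chaining such motifs so that each window of D steps uniquely predicts the next time step is exactly the Spiking Motif construction the paper trains end-to-end.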
q-bio.NC 2026-04-16

Latent alignment lifts mental imagery decoding from fMRI

Seeing the imagined: a latent functional alignment in visual imagery decoding from fMRI data

Mapping imagery signals into a perception model's conditioning space plus retrieval of similar trials raises semantic accuracy over the frozen pretrained baseline.

Figure from the paper
abstract
Recent progress in visual brain decoding from fMRI has been enabled by large-scale datasets such as the Natural Scenes Dataset (NSD) and powerful diffusion-based generative models. While current pipelines are primarily optimized for perception, their performance under mental-imagery remains less well understood. In this work, we study how a state-of-the-art (SOTA) perception decoder (DynaDiff) can be adapted to reconstruct imagined content from the Imagery-NSD benchmark. We propose a latent functional alignment approach that maps imagery-evoked activity into the pretrained model's conditioning space, while keeping the remaining components frozen. To mitigate the limited amount of matched imagery-perception supervision, we further introduce a retrieval-based augmentation strategy that selects semantically related NSD perception trials. Across four subjects, latent functional alignment consistently improves high-level semantic reconstruction metrics relative to the frozen pretrained baseline and a voxel-space ridge alignment baseline, and enables above-chance decoding from multiple cortical regions. These results suggest that semantic structure learned from perception can be leveraged to stabilize and improve visual imagery decoding under out-of-distribution conditions.
q-bio.NC 2026-04-15

Brain dynamics encode vision beyond static snapshots

The illusory simplicity of the feedforward pass: evidence for the dynamical nature of stimulus encoding along the primate ventral stream

Recordings from monkey visual areas show information in neural activity evolves over the first 100ms, not just in fixed patterns.

abstract
In studying primate vision, a large body of work focuses on the first feedforward sweep. During this initial time window, information is thought to pass through ventral stream regions in a stage-like fashion in an effort to extract high-level information from the retinal input. Consequently, electrophysiological analyses commonly focus on spatial response patterns, either by averaging data in time, or by applying decoders in a temporally local fashion. By analysing data recorded simultaneously across multiple arrays placed along the macaque ventral stream, we here show that this prior approach may be missing key aspects of information encoding. First, time-resolved, multivariate analyses of information transfer between V4 and IT reveal temporally and semantically varied information content as being exchanged within the first 100ms of processing. Second, by employing recurrent neural network (RNN) decoding techniques that extend across the temporal domain, we demonstrate that the neural pattern dynamics themselves carry categorical information far beyond the spatially encoded information available at any given time point. These findings challenge the prevailing view of a single, stage-like feedforward process and suggest that even the earliest parts of visual processing are better characterised as a spatiotemporally evolving process that encodes information in its dynamics rather than purely spatial response patterns.
q-bio.NC 2026-04-14

Machine learning links fronto-parietal circuits to ADHD motivation

Machine learning approaches to uncover the neural mechanisms of motivated behaviour: from ADHD to individual differences in effort and reward sensitivity

EEG and MRI studies across three cohorts show these areas as potential biomarkers for diagnosis and targeted interventions in motivational impairments.

Figure from the paper
abstract
Motivated behaviour relies on the brain's capacity to evaluate effort and reward. Dysregulation within these processes contributes to a spectrum of conditions, from hyperactivity in attention-deficit/hyperactivity disorder (ADHD) to diminished goal-directed behaviour in apathy. This thesis investigates the neural mechanisms underlying ADHD using electroencephalography (EEG) and examines individual differences in effort and reward sensitivity using neuroimaging, applying machine learning approaches through three main studies. In Study 1, task-based and resting-state EEG were employed with machine learning models to classify adult individuals with ADHD and healthy controls. Machine learning classifiers trained on task-based EEG during a stop signal task outperformed those trained on resting-state EEG, with the strongest predictive features arising from gamma-band spectral power over fronto-central and parietal regions. In Study 2, diffusion MRI and whole-brain permutation-based analyses identified associations between white matter integrity and computationally modelled parameters reflecting effort and reward sensitivity, with SMA-connected tracts emerging as a central hub. In Study 3, grey matter volumes from structural T1-weighted MRI were used to examine correlates of effort sensitivity, reward sensitivity, and subclinical apathy, with machine learning confirming robust decoding of reward sensitivity and apathy levels. Across studies, fronto-parietal circuits emerged as central to effort valuation and reward processing. These findings may serve as neural biomarkers for improving diagnostic accuracy in ADHD and motivational impairments, and for guiding personalised neurotechnological interventions.
q-bio.NC 2026-04-14

IIT's Φ has never been computed for any real physical system

Integrated information theory: the good, the bad and the misunderstood

Critique shows the theory's key measure lacks a definition for actual matter and needs continuous fields to match fundamental physics.

Figure from the paper
abstract
The integrated information theory of consciousness (IIT) is uniquely ambitious in proposing a mathematical formula, derived from apparently fundamental properties of conscious experience, to describe the quantity and quality of consciousness for any physical system that possesses it. IIT has generated considerable debate, which has engendered some misunderstandings and misrepresentations. Here we address and hope to remedy this. We begin by concisely summarising the essentials of IIT. Given IIT is supposed to apply universally, we do this with reference to an arbitrary patch of matter, as opposed to the usual system of discrete computational units. Then, after briefly summarising IIT's theoretical and empirical achievements, we focus on five points which we consider especially important for driving forward new theory and increasing understanding. First, a high value of the measure $\Phi$ is not synonymous with `more consciousness'. We describe how $\Phi$ might be replaced with a suite of quantities to obtain a multi-dimensional characterisation of states of consciousness. Second, we describe with nuance the distinct flavour of panpsychism implied by IIT -- whereby space (and time) are tiled with substrates of (proto-) consciousness -- and find this is not problematic for the theory. Third, $\Phi$ is not well-defined for real physical systems, and has not been computed on any real physical system. Fourth, so far only proxies for IIT measures have been computed, and not approximations. Fifth, for IIT to fit with current successful theories in fundamental physics, a reformulation in terms of continuous fields would be needed.
q-bio.NC 2026-04-14

Mental fatigue impairs balance only with eyes open

Relationship between the level of mental fatigue induced by a prolonged cognitive task and the degree of balance disturbance

Individual differences in visual attention use explain why some people lose stability after the same cognitive task.

Figure from the paper
abstract
This study investigated the effects of mental fatigue (MF) induced by a 90-min AX-continuous performance test (AX-CPT) on balance control by addressing the issue of the heterogeneity of individuals' responses. Twenty healthy young active participants were recruited. They had to carry out two balance tasks (sway as little as possible on a stable support with the eyes open and closed) when standing on a force platform before and after performing a 90-min AX-CPT. The NASA-TLX test was used to assess the subjective manifestations of MF. Objective cognitive performance was measured using results from the AX-CPT. Inter-individual differences in behavioral deterioration due to MF were analyzed with a hierarchical cluster analysis, which categorizes participants' behaviors into subgroups with similar characteristics. The cluster analysis revealed that completing the AX-CPT induced various levels of MF and balance impairments within the whole sample. A significant relationship between the level of MF and the degree of balance disturbance was observed only when participants stood with the eyes open, thus suggesting that inter-individual differences in vulnerability to MF could stem from differences between subjects in the level of engagement of visual attention and/or from differences in field dependency for balance control. These findings show that the completion of the same prolonged demanding cognitive task induces a strong heterogeneity in subjects' responses, with marked individual differences in MF vulnerability that affect balance control differently according to the sensory context.
q-bio.NC 2026-04-14

Brain signature for drug and food craving also tracks social craving

The Neurobiological Craving Signature (NCS) predicts social craving and responds to social isolation

The same fMRI pattern rises after social isolation as after fasting, indicating common circuits for different rewards.

Figure from the paper
abstract
Humans are inherently social and seek connection with others for survival. Recent studies suggest that acute social isolation leads to craving for social interactions, but the brain mechanisms of social craving and their relationship to brain networks underlying drug and food craving remain incompletely understood. Here we harnessed an existing dataset and tested whether the Neurobiological Craving Signature (NCS), a recently developed fMRI-based brain signature of drug and food craving, also predicts social craving. During fMRI, participants rated their craving for images of food, social interactions, and flowers in three different sessions: after 10h of fasting from food, 10h of social isolation, or neither (baseline; order of sessions counterbalanced). The NCS significantly predicted self-reported craving for food and social cues but not flower cues. Further, NCS responses to food were higher after fasting compared to baseline, and higher for social cues after social isolation compared to baseline, demonstrating its responsiveness to both food and social deprivation. These findings resonate with recent work showing shared brainstem circuits for hunger and social isolation, and indicate shared whole-brain circuits for social, food, and drug craving. They open new avenues for testing the NCS across different primary rewards, for assessing the consequences of their deprivation, and for examining how social deprivation, such as loneliness and isolation, interacts with overeating and drug use.
q-bio.NC 2026-04-14

Flow matching model predicts future brain activity more accurately

Probabilistic Prediction of Neural Dynamics via Autoregressive Flow Matching

Conditioning on past neural states and sensory input improves short-term forecasts of blood oxygenation changes across the cortex.

Figure from the paper
abstract
Forecasting neural activity in response to naturalistic stimuli remains a key challenge for understanding brain dynamics and enabling downstream neurotechnological applications. Here, we introduce a generative forecasting framework for modeling neural dynamics based on autoregressive flow matching (AFM). Building on recent advances in transport-based generative modeling, our approach probabilistically predicts neural responses at scale from multimodal sensory input. Specifically, we learn the conditional distribution of future neural activity given past neural dynamics and concurrent sensory input, explicitly modeling neural activity as a temporally evolving process in which future states depend on recent neural history. We evaluate our framework on the Algonauts project 2025 challenge functional magnetic resonance imaging dataset using subject-specific models. AFM significantly outperforms both a non-autoregressive flow-matching baseline and the official challenge general linear model baseline in predicting short-term parcel-wise blood oxygenation level-dependent (BOLD) activity, demonstrating improved generalization and widespread cortical prediction performance. Ablation analyses show that access to past BOLD dynamics is a dominant driver of performance, while autoregressive factorization yields consistent, modest gains under short-horizon, context-rich conditions. Together, these findings position autoregressive flow-based generative modeling as an effective approach for short-term probabilistic forecasting of neural dynamics with promising applications in closed-loop neurotechnology.
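The training objective can be sketched in one dimension. This is a generic conditional flow-matching toy, not the paper's architecture: a linear model stands in for the network, the "data" are draws from a shifted Gaussian, and the conditioning on neural history and stimuli is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flow matching in 1-D: learn a velocity field v(x, t) transporting
# noise x0 ~ N(0, 1) to "data" x1 ~ N(3, 0.5^2) along straight paths.
x1 = rng.normal(3.0, 0.5, 4000)        # stand-in for future neural activity
x0 = rng.normal(0.0, 1.0, 4000)        # noise source
t = rng.uniform(0.0, 1.0, 4000)
xt = (1.0 - t) * x0 + t * x1           # sample on the straight path
v_target = x1 - x0                     # its (constant) velocity

# A linear model v(x, t) = a*x + b*t + c stands in for the neural network;
# fitting it by least squares minimizes the flow-matching regression loss.
A = np.stack([xt, t, np.ones_like(t)], axis=1)
coef, *_ = np.linalg.lstsq(A, v_target, rcond=None)

# Generate: Euler-integrate dx/dt = v(x, t) from fresh noise.
x = rng.normal(0.0, 1.0, 2000)
for k in range(100):
    tk = k / 100.0
    x += 0.01 * (coef[0] * x + coef[1] * tk + coef[2])
# x now approximates the target distribution (mean ≈ 3)
```

The autoregressive variant conditions this velocity field on the recent BOLD history at each forecast step, which is where the paper's gains come from.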
q-bio.NC 2026-04-13

Neural states relax in warped spaces to form input-output mappings

Relaxing in Warped Spaces: Generalized Hierarchical and Modular Dynamical Neural Network

Periodic inputs create Lissajous-style relations between subspaces while association mode reveals a certainty-uncertainty trade-off in the network.

We propose a dynamical neural network model with a hierarchical and modular structure. The network architecture can be derived by minimizing an energy function originally designed around two kinds of neurons with quite different time constants. It has multiple subspaces spanned by neural parameters employed in the energy function, and adjacent subspaces are related to each other by a layered internetwork. Each internetwork further consists of a pair of a forward subnet and a backward one, and the signals flowing through these subnets determine the total dynamics of the network. The model can operate in either a learning or an association mode. In the learning mode, when periodic signals equivalent to repetitive neuronal bursting are suitably applied to input ports in all subspaces, mapping relationships corresponding to those input signals are eventually formed in the internetworks between subspaces. Various two-dimensional mapping relationships between subspaces can be shaped by employing an appropriate set of periodic input signals with different frequencies, based on the same mechanism as a Lissajous curve. In the association mode, the model provides an overall framework in which the state variables inside the network individually relax in warped spaces, each of which has been designed as favorable for one or more state variables. The association mode is further classified into two sub-modes: unconstrained and constrained. In the latter, for instance, when a sufficiently slow periodic trajectory is set as an input, a warped output trajectory appears in each subspace, as if imaginary layered networks with the inverse mapping relationships to the existing forward subnets were located hierarchically from outside to inside. These results suggest that a certainty/uncertainty relation exists between an input trajectory and an output trajectory.
q-bio.NC 2026-04-13

Astrocytic diffusion stabilizes neural activity bumps

Astrocytic resource diffusion stabilizes persistent activity in neural fields

Resource smoothing and replenishment suppress drift and widen the stable parameter range for stationary persistent activity.

Persistent neural activity underlying working memory requires sustained synaptic transmission, yet the metabolic and neurotransmitter support provided by astrocyte networks is largely absent from spatially extended neural circuit models. We introduce a coupled astrocyte-neural field model in which synaptic efficacy is regulated by depletion and recovery of a conserved resource pool recycled and spatially redistributed through diffusively coupled astrocytes. We obtain explicit stationary bump profiles and self-consistency conditions for bump width and amplitude on a canonical ring architecture. Linearizing about these solutions while carefully accounting for perturbations at bump boundaries, we analyze the resulting spectral problem governing stability. Our analysis, supported by numerical simulations and low-dimensional Fourier truncations, reveals a two-stage stabilization mechanism: astrocytic diffusion smooths resource asymmetries created by small bump displacements, and synaptic replenishment transfers this smoothing back to the synaptic pool. Together, sufficiently strong diffusion and replenishment suppress drift instabilities and enlarge the parameter regime in which stationary bumps persist.
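The coupled dynamics can be sketched as a rate field on a ring whose synaptic drive is gated by a diffusing, replenishing resource. The simulation below is a minimal Euler-integration illustration; all parameter values (connectivity profile, time constants, diffusion strength) are assumptions for the sketch, not the paper's calibrated values.

```python
import numpy as np

# Discretized ring model with an astrocytic resource field.
N, dt, steps = 128, 0.01, 2000
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Local excitation plus broad offset connectivity on the ring (illustrative).
W = (np.cos(theta[None, :] - theta[:, None]) + 0.5) * (2 * np.pi / N)

f = lambda u: 1.0 / (1.0 + np.exp(-8.0 * (u - 0.3)))   # firing-rate nonlinearity

u = np.exp(-theta**2 / 0.2)          # initial activity bump
s = np.ones(N)                        # astrocytic resource pool, starts full

tau_u, tau_s, beta, D = 1.0, 5.0, 0.5, 2.0
for _ in range(steps):
    lap = np.roll(s, 1) + np.roll(s, -1) - 2 * s       # ring Laplacian
    drive = W @ (s * f(u))                             # resource-gated synapses
    u += dt / tau_u * (-u + drive)                     # neural field dynamics
    s += dt * ((1.0 - s) / tau_s - beta * s * f(u) + D * lap)
```

The `D * lap` term implements the diffusive smoothing of resource asymmetries and the `(1 - s) / tau_s` term the replenishment; in the paper's analysis it is the combination of the two that suppresses drift of the bump.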
q-bio.NC 2026-04-13

AI general intelligence peaks then declines with specialization

The Rise and Fall of G in AGI

Benchmark correlations show the dominant G-factor weakening after 2023 as reasoning-specialized models arrive.

In the psychological literature the term `general intelligence' describes correlations between abilities, not simply the number of abilities. This paper connects Spearman's $g$-factor from psychometrics, measuring a positive manifold, to the implicit ``$G$-factor'' in claims about artificial general intelligence (AGI) performance on temporally structured benchmarks. Treating LLM benchmark batteries as cognitive test batteries and model releases as subjects, principal component analysis is applied to a models $\times$ benchmarks $\times$ time matrix spanning 39 models (2019--2025) and 14 benchmarks. Preliminary results confirm a strong positive manifold in which all 28 pairwise correlations are positive across 8 benchmarks. Analyzing the spectrum of the benchmark correlation matrix through time, PC1 explains 90\% of variance on a 5-benchmark core battery ($n=19$), reducing to 77\% by 2024. On a four-benchmark battery, PC1 peaks at 92\% of the variance between 2023--2024 and falls to 64\% with the arrival of reasoning-specialized models in 2024. This is coincident with a rotation in the $G$-factor as models outsource `reasoning' to tools. The analysis of partial correlation matrices through time provides evidence for the evolution of specialization beneath the positive manifold of general intelligence (AI-hedgehog) encompassing diverse high-dimensional problem-solving systems (AI-foxes). In strictly psychometric terms, AI models exhibit general intelligence suppressing specialized intelligences. LLMs invert the ideal of substituting complicated models with parsimonious mechanisms, a `Ptolemaic Succession' of theories, with architectures of increasing hierarchical complication and capability.
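The positive-manifold and PC1-share computations are standard psychometrics and can be reproduced on synthetic data. The sketch below uses fabricated scores with a shared ability factor (19 releases x 5 benchmarks, mimicking the paper's core-battery dimensions but not its data) to show the two quantities reported: all-positive pairwise correlations and PC1's share of variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scores: 19 model releases x 5 benchmarks, with a shared "ability"
# factor to induce a positive manifold (illustration only, not the real battery).
ability = rng.normal(size=(19, 1))
scores = ability + 0.4 * rng.normal(size=(19, 5))

# Positive-manifold check via the benchmark correlation matrix.
R = np.corrcoef(scores, rowvar=False)
off_diag = R[np.triu_indices(5, k=1)]          # the 10 pairwise correlations

# PC1 variance explained = largest eigenvalue / trace of the correlation matrix.
eigvals = np.linalg.eigvalsh(R)
pc1_share = eigvals[-1] / eigvals.sum()
```

Tracking `pc1_share` in a sliding window over release dates is what reveals the rise-then-fall pattern the paper reports.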
q-bio.NC 2026-04-09

New toolbox runs full EEG and MEG pipeline in one GUI

MLE-Toolbox: An Open-Source Toolbox for Comprehensive EEG and MEG Data Analysis

Integrates import, artifact removal, source localization, connectivity, and ML classification with links to Brainstorm and FieldTrip.

MLE-Toolbox is a comprehensive open-source MATLAB toolbox for end-to-end analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data. Inspired by widely used neuroimaging platforms such as Brainstorm and FieldTrip, it integrates the full analysis pipeline within a unified and user-friendly graphical user interface (GUI), covering raw data import, preprocessing, source localization, functional connectivity, oscillatory analysis, and machine learning-based classification. The toolbox includes automated artifact rejection methods, including independent component analysis (ICA), signal-space projection (SSP), and signal-space separation (SSS); multiple source localization approaches, including minimum norm estimation (MNE), dynamic statistical parametric mapping (dSPM), standardized low-resolution brain electromagnetic tomography (sLORETA), and beamforming; multi-atlas parcellation with anatomical visualization; spectral power analysis with frequency-band brain mapping; phase-amplitude coupling (PAC); graph-theoretic brain network analysis; and integrated machine learning and deep learning classifiers. MLE-Toolbox also provides native interoperability with Brainstorm, FieldTrip, EEGLAB, and FreeSurfer, allowing researchers to build on established workflows while benefiting from additional automation, interactive visualization, and one-click academic report generation. Freely available for non-commercial use, MLE-Toolbox is designed to lower the barrier to rigorous, reproducible MEG/EEG research.
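The first stages of any such pipeline (import, filtering, epoching) follow a common pattern regardless of platform. The sketch below illustrates that pattern in Python with SciPy rather than the MATLAB toolbox itself; channel count, sampling rate, band edges, and the fake trigger times are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
raw = rng.normal(size=(32, int(60 * fs)))    # 32 channels, 60 s of fake data

# Zero-phase band-pass 1-40 Hz, a typical EEG preprocessing choice.
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, raw, axis=1)

# Epoch -200 ms .. +800 ms around evenly spaced fake trigger events.
events = np.arange(int(fs), raw.shape[1] - int(fs), int(2 * fs))
win = (int(-0.2 * fs), int(0.8 * fs))
epochs = np.stack([filtered[:, e + win[0]: e + win[1]] for e in events])
# epochs has shape (n_events, n_channels, n_samples_per_epoch)
```

Source localization, connectivity, and classification would then operate on `epochs`; the toolbox's value is bundling those later stages behind one GUI.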
q-bio.NC 2026-04-09

Classical instruments reproduce all sequential cognitive effects

Quantum-like Cognition in Process Theories: An Analysis

Any sequence of decisions fits a general classical model, so quantum accounts are unnecessary until parallel joint decisions are tested for Bell inequality violations.

Various effects in human cognition, often considered `non-classical', have been argued to be most naturally modelled by quantum-like models of decision making. We extend this approach to describe models of cognition and decision-making in general probabilistic process theories, which include both classical probabilistic models and quantum instrument models as special cases. We show how many aspects of quantum-like cognition can be described diagrammatically in process theories, before using our approach to assess the arguments for quantum-like models. While standard Bayesian classical models are insufficient, we prove that any sequential decision data can in fact be given a more general form of classical instrument model, and see that even simple deterministic models can exhibit all cognitive effects. Restricting attention to instruments induced by measurements, such as classical Bayesian and quantum POVM models, rules out such a result, but is challenged by the fact that such instruments cannot account for certain effects. Finally, we argue that to strictly rule out classical instrument models one should make use of parallel composition in the modelling of joint decisions, and find real world cognitive data violating Bell inequalities.
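The Bell-inequality test the authors appeal to can be made concrete with a CHSH-style check: any classical (shared-latent-variable) strategy for a pair of parallel binary decisions satisfies |S| <= 2, so only data exceeding that bound rules out classical instrument models. The sketch below evaluates S for one deterministic classical strategy on synthetic data; the settings and outcomes are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlation(a, b):
    """Mean product of two +/-1 outcome arrays."""
    return np.mean(a * b)

# A deterministic classical strategy: both agents' outcomes are fixed
# functions of a shared latent variable lam.
lam = rng.choice([-1, 1], size=100_000)
A = {0: lam, 1: lam}          # first decision under settings 0/1
B = {0: lam, 1: -lam}         # second decision under settings 0/1

S = (correlation(A[0], B[0]) + correlation(A[0], B[1])
     + correlation(A[1], B[0]) - correlation(A[1], B[1]))
# This strategy saturates the classical bound: S = 2 exactly.
```

Quantum strategies can reach |S| = 2*sqrt(2); cognitive data above 2 would be the evidence the paper argues is needed to strictly exclude classical instrument models.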
q-bio.NC 2026-04-06 1 theorem

Post-training tunes LLMs to match creative brain patterns

Large Language Models Align with the Human Brain during Creative Thinking

Creativity-focused models preserve alignment with high-originality neural responses while reasoning training shifts representations toward analytical processing.

Creative thinking is a fundamental aspect of human cognition, and divergent thinking-the capacity to generate novel and varied ideas-is widely regarded as its core generative engine. Large language models (LLMs) have recently demonstrated impressive performance on divergent thinking tests and prior work has shown that models with higher task performance tend to be more aligned to human brain activity. However, existing brain-LLM alignment studies have focused on passive, non-creative tasks. Here, we explore brain alignment during creative thinking using fMRI data from 170 participants performing the Alternate Uses Task (AUT). We extract representations from LLMs varying in size (270M-72B) and measure alignment to brain responses via Representational Similarity Analysis (RSA), targeting the creativity-related default mode and frontoparietal networks. We find that brain-LLM alignment scales with model size (default mode network only) and idea originality (both networks), with effects strongest early in the creative process. We further show that post-training objectives shape alignment in functionally selective ways: a creativity-optimized \texttt{Llama-3.1-8B-Instruct} preserves alignment with high-creativity neural responses while reducing alignment with low-creativity ones; a human behavior fine-tuned model elevates alignment with both; and a reasoning-trained variant shows the opposite pattern, suggesting chain-of-thought training steers representations away from creative neural geometry toward analytical processing. These results demonstrate that post-training objectives selectively reshape LLM representations relative to the neural geometry of human creative thought.
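Representational Similarity Analysis, the alignment measure used here, compares the *geometry* of two representations: build a dissimilarity matrix over stimuli for each system, then rank-correlate their upper triangles. The sketch below is a generic numpy implementation on fabricated data (20 hypothetical AUT responses, assumed voxel and embedding dimensions), not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def rdm(X):
    """Representational dissimilarity matrix: 1 - correlation between patterns."""
    return 1.0 - np.corrcoef(X)

def upper(M):
    """Vectorize the upper triangle (excluding the diagonal)."""
    return M[np.triu_indices(M.shape[0], k=1)]

def spearman(x, y):
    """Spearman correlation via rank transform (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Fabricated stimuli: 20 responses sharing a 50-dim latent structure, projected
# into "brain" (300 voxels) and "LLM" (768 dims) spaces.
stimuli = rng.normal(size=(20, 50))
brain = stimuli @ rng.normal(size=(50, 300))
llm = stimuli @ rng.normal(size=(50, 768))

alignment = spearman(upper(rdm(brain)), upper(rdm(llm)))
```

Because both fabricated representations inherit the same latent geometry, `alignment` comes out high; in the paper this score is computed per network (default mode, frontoparietal) and per model.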
q-bio.NC 2026-04-06 2 theorems

Brain patches multiplex phonemes

Temporal structure of the language hierarchy within small cortical patches

Tiny cortical areas hold successive speech units together by shifting their neural code rather than by spatial separation.

Speech production requires the rapid coordination of a complex hierarchy of linguistic units, transforming a semantic representation into a precise sequence of articulatory movements. To unravel the neural mechanisms underlying this feat, we leverage recordings from eight 3.2 x 3.2 mm 64-microelectrode arrays implanted in the motor cortex and inferior frontal gyrus of two patients tasked with producing twenty thousand sentences. We show that a hierarchy of linguistic features is robustly encoded in most of these small cortical patches. Contrary to our expectations, instead of a clear macroscopic organization between patches, we observe a multiplexing of phonetic, syllabic and lexical representations within each cortical patch. Critically, this coding scheme dynamically changes over time to allow successive phonemes, syllables and words to be simultaneously represented without interference. Overall, these results, reminiscent of position encoding in transformers, show how small cortical patches organize the unfolding of the speech hierarchy during language production.
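The transformer analogy the authors draw can be made concrete: sinusoidal positional encoding gives every sequence position a distinct phase pattern, so the same content vector at different positions stays separable, much like the time-shifting neural code described above. A minimal sketch of the standard encoding from "Attention Is All You Need" (toy sizes assumed):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: each position gets a unique phase
    pattern across dimensions, so identical tokens at different positions
    remain distinguishable."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(8, 16)

# The same "phoneme" vector added to different positional codes yields a
# distinct representation at every sequence position.
phoneme = np.ones(16)
coded = phoneme + pe               # broadcasts over the 8 positions
```

In the cortical data the "positional code" is realized by temporal drift of the patch's encoding axes rather than by an additive sinusoid, but the computational role (keeping successive units from interfering) is analogous.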
q-bio.NC 2026-04-06 2 theorems

Pretrained models plus brain geometry recover intracranial detail in scalp EEG

Bridging scalp and intracranial EEG in BCI via pretrained neural representations and geometric constraint embedding

By mapping cortical anatomy to propagation constraints, the method synthesizes enhanced signals that restore patterns lost on the way to the scalp.

Electroencephalography (EEG) has become one of the key modalities underpinning brain-computer interfaces (BCIs) due to its high temporal resolution, rapid responsiveness, non-invasiveness, low cost, and portability. However, EEG signals are substantially inferior to intracranial EEG (iEEG) in signal-to-noise ratio and local spatial resolution, whereas iEEG suffers from extremely limited clinical accessibility owing to its invasive nature, hindering widespread application. To address this challenge, this study proposes a unified data- and prior-knowledge-driven framework for EEG-iEEG representational enhancement. Guided by the principle that "geometric structure dictates function", the framework maps static cortical anatomy onto dynamic constraints governing neural signal propagation and integrates general-purpose neural representations extracted by a pre-trained large EEG model to explicitly model signal transmission through the brain. Enhanced EEG signals are then synthesized via a multidimensional representation diffusion process. Extensive experimental results demonstrate that the generated enhanced EEG signals effectively recover the neural activity patterns lost during propagation through the brain. This finding indicates that the performance ceiling of BCIs is constrained not only by acquisition hardware but also by the depth to which the generative model resolves the mechanisms of neural signal propagation. Collectively, the proposed framework provides a viable pathway toward acquiring high-fidelity neural signals at low cost.
