URL: https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/neuro.06.004.2008/full
8 Pith papers cite this work, alongside 1,285 external citations. Polarity classification is still indexing.
8 representative citing papers
MTA improves LLM knowledge distillation by aligning representations along layer-wise trajectories with adaptive granularity from words to phrases using dynamic structural and hidden representation alignment losses.
Heterogeneous visual agents form shared symbols via decentralized Metropolis-Hastings captioning, where encoder similarity shapes the content and symmetry of the resulting language.
Decoding alignment metrics can remain high and unchanged even when encoding manifold topology is causally altered, so they do not imply similar function or computation across neural populations.
LLMs exhibit Bayesian-like hypothesis updating with strong-sampling bias and an evaluation-generation gap but generalize poorly outside observed data.
Alignment pattern analysis reveals that models aligned to individual brain ROIs do not reproduce the stable cross-region alignment profiles observed across human subjects.
Language models encode concept hierarchies as linear transformations that are domain-specific yet structurally similar across domains.
SRA reframes cross-tokenizer LLM distillation as alignment of attention-weighted span centers of mass in a multi-particle dynamical system and reports consistent gains over prior CTKD baselines.
citing papers explorer
- Raising the Ceiling: Better Empirical Fixation Densities for Saliency Benchmarking
  A mixture model with adaptive KDE and per-image cross-validation raises estimated human fixation consistency by 5-15% median log-likelihood and up to 2 AUC points over fixed-bandwidth Gaussian baselines.
- Emergent Communication between Heterogeneous Visual Agents through Decentralized Learning
  Heterogeneous visual agents form shared symbols via decentralized Metropolis-Hastings captioning, where encoder similarity shapes the content and symmetry of the resulting language.