A single end-to-end Transformer model unifies stellar labels from heterogeneous spectroscopic surveys into a self-consistent scale without post-hoc recalibration.
2 Pith papers cite this work.
2 Pith papers citing it · year: 2026 · 2 verdicts (unverdicted) · 2 representative citing papers
Citing papers
- Homogeneous Stellar Parameters from Heterogeneous Spectra with Deep Learning
  A single end-to-end Transformer model unifies stellar labels from heterogeneous spectroscopic surveys into a self-consistent scale without post-hoc recalibration.
- Calibrating Scientific Foundation Models with Inference-Time Stochastic Attention
  Stochastic Attention adds calibrated uncertainty to transformer foundation models through inference-time multinomial sampling of attention weights and univariate post-hoc tuning of a concentration parameter.
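The second summary describes a concrete mechanism: at inference time, attention weights are treated as a multinomial distribution and sampled rather than averaged, with a single concentration parameter tuned post hoc. A minimal NumPy sketch of that idea follows; the function name, tensor shapes, number of draws, and the mean/std aggregation are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def stochastic_attention(q, k, v, concentration=1.0, n_draws=8, rng=None):
    """One attention head with multinomial sampling of attention weights.

    Instead of the deterministic softmax-weighted average, each draw
    samples one key index per query from the attention distribution
    (sharpened or flattened by `concentration`) and takes that key's
    value row. Averaging over draws approximates standard attention;
    the spread across draws provides an uncertainty signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                        # (n_q, n_k)
    # Concentration is the single post-hoc tuning knob: it scales the
    # logits before the softmax, controlling how peaked the sampling
    # distribution is.
    shifted = concentration * (logits - logits.max(-1, keepdims=True))
    probs = np.exp(shifted)
    probs /= probs.sum(-1, keepdims=True)
    draws = []
    for _ in range(n_draws):
        idx = [rng.choice(len(p), p=p) for p in probs]   # one multinomial sample per query
        draws.append(v[idx])                             # (n_q, d_v)
    draws = np.stack(draws)                              # (n_draws, n_q, d_v)
    return draws.mean(0), draws.std(0)                   # point estimate, uncertainty
```

In this sketch the concentration parameter would be tuned on held-out data (a univariate search) so that the spread across draws matches the empirically observed errors, which is what makes the resulting uncertainty "calibrated".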