pith. machine review for the scientific record.

arxiv: 2512.16762 · v3 · submitted 2025-12-18 · 💻 cs.LG

Recognition: unknown

NRGPT: An Energy-based Alternative for GPT

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: language modeling, energy, energy-based, exploration, inference, landscape, model
Original abstract

Generative Pre-trained Transformer (GPT) architectures are the most popular design for language modeling. Energy-based modeling is a different paradigm that views inference as a dynamical process operating on an energy landscape. We propose a minimal modification of the GPT setting that unifies it with the EBM framework. The inference step of our model, which we call eNeRgy-GPT (NRGPT), is conceptualized as an exploration of the tokens on the energy landscape. We prove, and verify empirically, that under certain circumstances this exploration becomes gradient descent, although such dynamics do not necessarily yield the best-performing models. We demonstrate that our model performs well on simple language (the Shakespeare dataset), algebraic ListOps tasks, and richer settings such as OpenWebText language modeling. We also observe that our models may be more resistant to overfitting, doing so only after very long training.
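The abstract's central idea, inference as a dynamical process descending an energy landscape, can be illustrated with a short sketch. The snippet below is not the NRGPT implementation: the quadratic `energy` function, the step count, and the step size are illustrative assumptions; the abstract only states that, under certain circumstances, exploration of the landscape reduces to gradient descent.

```python
# Minimal sketch (assumptions, not the authors' method): energy-based
# inference as gradient descent on an energy landscape over a continuous
# token representation z.
import torch

def energy(z: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    # Toy quadratic energy; the real NRGPT energy is defined by the model.
    return 0.5 * (z @ W @ z)

def gradient_descent_inference(z0: torch.Tensor, W: torch.Tensor,
                               steps: int = 50, lr: float = 0.1) -> torch.Tensor:
    # "Exploration" of the landscape implemented as plain gradient descent.
    z = z0.clone().requires_grad_(True)
    for _ in range(steps):
        e = energy(z, W)
        (grad,) = torch.autograd.grad(e, z)
        with torch.no_grad():
            z -= lr * grad  # move downhill on the energy surface
    return z.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    d = 8
    A = torch.randn(d, d)
    W = A @ A.T + torch.eye(d)   # positive definite => convex toy landscape
    z0 = torch.randn(d)          # initial token representation
    z_star = gradient_descent_inference(z0, W)
    print("initial energy:", energy(z0, W).item())
    print("final energy:  ", energy(z_star, W).item())
```

With a convex toy energy the descent converges to the unique minimum; the paper's point is that the same dynamical view applies to the learned, non-convex landscape of a language model.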

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Hyperparameter Transfer for Dense Associative Memories

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Explicit scaling prescriptions for hyperparameters in DenseAMs are derived from model dynamics and shown to match empirical results across scales.

  2. Revisiting Transformer Layer Parameterization Through Causal Energy Minimization

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    CEM recasts Transformer layers as energy minimization steps, enabling constrained parameterizations like weight sharing and low-rank interactions that match standard baselines in 100M-scale language modeling.