pith. machine review for the scientific record

arXiv: 2601.04237 · v2 · submitted 2026-01-04 · 💻 cs.AI · cs.CL · cs.LG

Recognition: unknown

SAGE-32B: Agentic Reasoning via Iterative Distillation

Authors on Pith: no claims yet
classification: 💻 cs.AI · cs.CL · cs.LG
keywords: reasoning · sage-32b · agentic · model · distillation · iterative · models · planning
Original abstract

We present SAGE-32B, a 32-billion-parameter language model focused on agentic reasoning and long-range planning tasks. Unlike chat models that aim for general conversational fluency, SAGE-32B is designed to operate in an agentic loop, emphasizing task decomposition, tool use, and error recovery. The model is initialized from the Qwen2.5-32B pretrained model and fine-tuned using Iterative Distillation, a two-stage training process that improves reasoning performance through rigorously tested feedback loops. SAGE-32B also introduces an inverse reasoning approach, in which a meta-cognition head forecasts potential failures in a plan before execution. On agentic reasoning benchmarks including MMLU-Pro, AgentBench, and MATH-500, SAGE-32B achieves higher success rates in multi-tool scenarios than similarly sized baseline models, while remaining competitive on standard reasoning evaluations. Model weights are publicly released at https://huggingface.co/sagea-ai/sage-reasoning-32b
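The abstract names two runtime ingredients: an agentic loop (decompose the task, call tools, recover from errors) and a meta-cognition head that forecasts a plan's failure risk before execution. The paper's actual interfaces are not given here; the sketch below is a minimal, hypothetical illustration of how such a gate could sit inside an agentic loop, with every name (`run_agent`, `Step`, `propose_plan`, `failure_risk`) invented for the example.

```python
# Hypothetical sketch (not from the paper): an agentic loop in which a
# meta-cognition score gates each plan before any tool is executed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str   # which tool to invoke
    args: dict  # arguments for the tool call

def run_agent(
    propose_plan: Callable[[str], list[Step]],    # task decomposition
    failure_risk: Callable[[list[Step]], float],  # meta-cognition head
    tools: dict,
    task: str,
    risk_threshold: float = 0.5,
    max_attempts: int = 3,
) -> list:
    for _ in range(max_attempts):
        plan = propose_plan(task)
        # "Inverse reasoning": forecast failure *before* execution and
        # replan instead of running a plan judged likely to fail.
        if failure_risk(plan) > risk_threshold:
            task += "\n(previous plan judged too risky; replan)"
            continue
        try:
            return [tools[step.tool](**step.args) for step in plan]
        except Exception as err:
            # Error recovery: fold the failure back into the task context.
            task += f"\n(previous attempt failed: {err})"
    raise RuntimeError("no low-risk plan succeeded")

# Toy usage with stubs standing in for the model and the meta-cognition head.
tools = {"search": lambda query: f"results for {query!r}"}
plan = lambda task: [Step("search", {"query": task.splitlines()[0]})]
risk = lambda steps: 0.1  # a trained head would score the actual plan
print(run_agent(plan, risk, tools, "find SAGE-32B benchmark results"))
```

The gate-then-replan structure is one plausible reading of "forecast potential failures in the planning process before execution"; the paper may instead score individual steps or revise plans in place.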

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. SAGE Celer 2.6 Technical Card

    cs.CL · 2026-03 · unverdicted · novelty 2.0

    SAGE Celer 2.6 is a new line of language models with inverse reasoning training, integrated vision, and strong performance on math, coding, and South Asian language benchmarks.