STRIDE injects distilled LLM reasoning into time series foundation models (TSFMs) as continuous cross-modal priors via mean-pooled hidden states, achieving state-of-the-art forecasting (0.674 MASE, 0.454 CRPS) on GIFT-Eval and superior reasoning on TFRBench.
5 Pith papers cite this work.
Years: 2026 (5) · Verdicts: unverdicted (5) · Representative citing papers: 5
Citing papers explorer
-
Reasoning-Aware Training for Time Series Forecasting
STRIDE injects distilled LLM reasoning into time series foundation models (TSFMs) as continuous cross-modal priors via mean-pooled hidden states, achieving state-of-the-art forecasting (0.674 MASE, 0.454 CRPS) on GIFT-Eval and superior reasoning on TFRBench.
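The injection mechanism described in the summary might look like the following minimal sketch — mean-pool the LLM's token-level hidden states, project the pooled vector into the forecaster's embedding space, and add it as a prior. Names like `mean_pool` and `inject_prior` are hypothetical, and `numpy` stands in for the actual model stack:

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    # hidden_states: (seq_len, d_llm) token states from a frozen LLM.
    # attention_mask: (seq_len,) with 1 for real tokens, 0 for padding.
    mask = attention_mask[:, None].astype(float)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

def inject_prior(ts_embedding, reasoning_state, proj):
    # proj: (d_model, d_llm) learned projection. The pooled LLM state
    # is mapped into the TSFM's embedding space and added as a
    # continuous cross-modal prior.
    return ts_embedding + proj @ reasoning_state
```

Mean pooling over non-padding tokens turns a variable-length reasoning trace into one fixed-size vector, which is what makes the prior "continuous" rather than token-level.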
-
SCURank: Ranking Multiple Candidate Summaries with Summary Content Units for Enhanced Summarization
SCURank ranks multiple summary candidates with Summary Content Units to outperform ROUGE and LLM-based methods in summarization distillation.
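SCU-based ranking as summarized above can be sketched roughly as follows: score each candidate by the fraction of Summary Content Units it supports, then sort. The `supports` predicate is an assumption — in practice it would be an entailment model, not the toy substring check used in the usage example below:

```python
def scu_score(candidate, scus, supports):
    # Fraction of Summary Content Units the candidate covers,
    # as judged by a caller-supplied support predicate.
    return sum(supports(candidate, unit) for unit in scus) / len(scus)

def rank_candidates(candidates, scus, supports):
    # Best-supported candidate first.
    return sorted(candidates, key=lambda c: scu_score(c, scus, supports),
                  reverse=True)
```

Usage with a toy substring predicate: `rank_candidates(["The cat sat on the mat.", "A dog ran."], ["the cat sat", "on the mat"], lambda c, u: u.lower() in c.lower())` ranks the first candidate highest.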
-
Skill-SD: Skill-Conditioned Self-Distillation for Multi-turn LLM Agents
Skill-SD turns an agent's completed trajectories into dynamic natural-language skills that condition only the teacher in self-distillation, yielding 14-42% gains over RL and OPSD baselines on multi-turn agent benchmarks.
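The asymmetry the summary describes — skills condition only the teacher — can be illustrated with a prompt-construction sketch. `build_prompts` is a hypothetical helper, not the paper's code; the point is that the student sees only the bare task and must internalize the skill-conditioned behavior through distillation:

```python
def build_prompts(task, skills):
    # Teacher prompt is conditioned on natural-language skills distilled
    # from the agent's own completed trajectories; the student prompt
    # deliberately omits them.
    teacher = ("Skills:\n"
               + "\n".join(f"- {s}" for s in skills)
               + f"\n\nTask: {task}")
    student = f"Task: {task}"
    return teacher, student
```

The distillation loss would then push the student's outputs toward the skill-conditioned teacher's outputs on the same task.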
-
OmniThoughtVis: A Scalable Distillation Pipeline for Deployable Multimodal Reasoning Models
OmniThoughtVis curates 1.8M multimodal CoT samples via teacher distillation, difficulty annotation, and tag-based sampling, yielding consistent gains on nine reasoning benchmarks and allowing 4B models to match or beat undistilled 8B baselines.
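The tag-based sampling step mentioned in the summary could be sketched as stratified sampling over tags (e.g. difficulty buckets). This is an illustrative assumption about the mechanism, with hypothetical names, not the paper's pipeline:

```python
import random
from collections import defaultdict

def tag_balanced_sample(samples, k, seed=0):
    # samples: list of dicts with a "tag" key (e.g. a difficulty bucket).
    # Draw roughly k items spread evenly across tags, so no single
    # difficulty level dominates the distilled training set.
    rng = random.Random(seed)
    by_tag = defaultdict(list)
    for s in samples:
        by_tag[s["tag"]].append(s)
    per_tag = max(1, k // len(by_tag))
    picked = []
    for tag, group in sorted(by_tag.items()):
        picked.extend(rng.sample(group, min(per_tag, len(group))))
    return picked[:k]
```

With three tags and `k=6`, each tag contributes two samples, keeping the curated subset balanced across annotated difficulty.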
-
Accurate Legal Reasoning at Scale: Neuro-Symbolic Offloading and Structural Auditability for Robust Legal Adjudication
A neuro-symbolic system converts legal clauses into deterministic typed graphs for consistent, auditable adjudication, cutting compute costs by over 90% versus direct use of large reasoning models.
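The deterministic, auditable core described above can be sketched as typed clause structures evaluated by plain rule matching — once an LLM has compiled a clause into this form, each case is adjudicated without any further model calls. The types and `adjudicate` function are illustrative assumptions, not the system's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    fact: str

@dataclass(frozen=True)
class Obligation:
    action: str

def adjudicate(clause_graph, facts):
    # clause_graph: list of (conditions, obligation) pairs compiled from
    # legal clauses; facts: set of fact strings true for this case.
    # Deterministic evaluation makes every verdict reproducible and
    # auditable, replacing a costly reasoning-model call per case.
    return [obligation for conditions, obligation in clause_graph
            if all(c.fact in facts for c in conditions)]
```

Because evaluation is pure rule matching, the same facts always produce the same verdict, which is where both the consistency and the compute savings come from.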