pith. machine review for the scientific record.

arxiv: 1711.05101 · v3 · submitted 2017-11-14 · 💻 cs.LG · cs.NE · math.OC

Recognition: unknown

Decoupled Weight Decay Regularization

Ilya Loshchilov, Frank Hutter

classification 💻 cs.LG cs.NE math.OC
keywords decay, weight, regularization, adam, algorithms, decoupled, gradient
read the original abstract

L$_2$ regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is \emph{not} the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L$_2$ regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by \emph{decoupling} the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at https://github.com/loshchil/AdamW-and-SGDW
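The contrast the abstract draws can be made concrete with a minimal sketch of the two update rules. This is illustrative, not the paper's reference implementation: scalar parameters and the function names are assumptions for compactness. The only difference between the two functions is where the decay term `wd * w` enters — inside the gradient (so it passes through Adam's adaptive rescaling by the second-moment estimate) versus directly on the weights, decoupled from the adaptive step.

```python
import math

def adam_l2_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, wd=1e-2):
    """Adam with L2 regularization: the decay term is folded into the
    gradient and is therefore rescaled by the adaptive denominator."""
    g = g + wd * w                         # L2 penalty enters the gradient
    m = beta1 * m + (1 - beta1) * g        # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)           # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

def adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, wd=1e-2):
    """Decoupled weight decay (AdamW): the decay acts on the weights
    outside the adaptive update, so it is not rescaled by v_hat."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)
    return w, m, v
```

A degenerate case shows the inequivalence: with a zero loss gradient, the decoupled rule shrinks the weight by exactly `lr * wd * w` per step, whereas in the coupled rule the decay itself feeds the second-moment estimate and comes back rescaled, so the effective shrinkage depends on the optimizer state rather than on `wd` alone.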

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 60 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Rigel3D: Rig-aware Latents for Animation-Ready 3D Asset Generation

    cs.GR 2026-05 unverdicted novelty 8.0

    Rigel3D jointly generates rigged 3D meshes with geometry, skeleton topology, joint positions, and skinning weights using coupled surface and skeleton latent representations for image-conditioned animation-ready asset ...

  2. Online Learning-to-Defer with Varying Experts

    stat.ML 2026-05 unverdicted novelty 8.0

    Presents the first online learning-to-defer algorithm with regret bounds O((n + n_e) T^{2/3}) generally and O((n + n_e) sqrt(T)) under low noise for multiclass classification with varying experts.

  3. Dissecting Jet-Tagger Through Mechanistic Interpretability

    hep-ph 2026-05 accept novelty 8.0

    A Particle Transformer jet tagger contains a sparse six-head circuit whose source-relay-readout structure recovers most performance and whose residual stream preferentially encodes 2-prong energy correlators.

  4. LLM Translation of Compiler Intermediate Representation

    cs.PL 2026-05 unverdicted novelty 8.0

    IRIS-14B is the first LLM trained explicitly for GIMPLE-to-LLVM IR translation and outperforms much larger models by up to 44 percentage points on real-world C code.

  5. CADFS: A Big CAD Program Dataset and Framework for Computer-Aided Design with Large Language Models

    cs.CV 2026-05 unverdicted novelty 8.0

    CADFS supplies a large real-world CAD dataset and FeatureScript representation that, after VLM fine-tuning, produces more accurate and feature-rich designs than prior generative CAD systems.

  6. Stability and Generalization in Looped Transformers

    cs.LG 2026-04 unverdicted novelty 8.0

    Looped transformers with recall and outer normalization produce reachable, input-dependent fixed points with stable gradients, enabling generalization, while those without recall cannot; a new internal recall variant ...

  7. CLAD: Efficient Log Anomaly Detection Directly on Compressed Representations

    cs.LG 2026-04 unverdicted novelty 8.0

    CLAD is the first deep learning framework for log anomaly detection that operates directly on compressed byte streams using a dilated convolutional encoder, hybrid Transformer-mLSTM, and two-stage training, achieving ...

  8. Large Language Diffusion Models

    cs.CL 2025-02 unverdicted novelty 8.0

    LLaDA is a scalable diffusion-based language model that matches autoregressive LLMs like LLaMA3 8B on tasks and surpasses GPT-4o on reversal poem completion.

  9. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow

    cs.LG 2022-09 unverdicted novelty 8.0

    Rectified flow learns straight-path neural ODEs for distribution transport, yielding efficient generative models and domain transfers that work well even with a single simulation step.

  10. RoFormer: Enhanced Transformer with Rotary Position Embedding

    cs.CL 2021-04 accept novelty 8.0

    RoFormer introduces rotary position embeddings that encode absolute positions via rotation matrices and relative dependencies in attention, outperforming prior position methods on long text classification tasks.

  11. Language Models are Few-Shot Learners

    cs.CL 2020-05 accept novelty 8.0

    GPT-3 shows that scaling an autoregressive language model to 175 billion parameters enables strong few-shot performance across diverse NLP tasks via in-context prompting without fine-tuning.

  12. Very Efficient Listwise Multimodal Reranking for Long Documents

    cs.IR 2026-05 unverdicted novelty 7.0

    ZipRerank delivers state-of-the-art multimodal listwise reranking accuracy for long documents at up to 10x lower latency via early interaction and single-pass scoring.

  13. Gradient Clipping Beyond Vector Norms: A Spectral Approach for Matrix-Valued Parameters

    cs.LG 2026-05 unverdicted novelty 7.0

    Spectral clipping of leading singular values in gradient matrices stabilizes SGD for non-convex problems with heavy-tailed noise and achieves the optimal convergence rate O(K^{(2-2α)/(3α-2)}).

  14. Block-R1: Rethinking the Role of Block Size in Multi-domain Reinforcement Learning for Diffusion Large Language Models

    cs.LG 2026-05 unverdicted novelty 7.0

    Block-R1 formulates domain block size conflicts in multi-domain RL for dLLMs, releases a 41K-sample dataset with per-sample best block sizes and a conflict score, and provides a benchmark plus simple cross-domain trai...

  15. PointForward: Feedforward Driving Reconstruction through Point-Aligned Representations

    cs.CV 2026-05 unverdicted novelty 7.0

    PointForward uses sparse world-space 3D queries and scene graphs to deliver consistent single-pass reconstruction of dynamic driving scenes via point-aligned representations.

  16. Relative Score Policy Optimization for Diffusion Language Models

    cs.CL 2026-05 unverdicted novelty 7.0

    RSPO interprets reward advantages as targets for relative log-ratios in dLLMs, calibrating noisy estimates to stabilize RLVR training and achieve strong gains on planning tasks with competitive math reasoning performance.

  17. Inverse Design of Metainterfaces for Static Friction Control: Beyond the Hertzian Limit

    cond-mat.soft 2026-05 unverdicted novelty 7.0

    A differentiable physics engine inside a neural network discovers non-Hertzian asperity shapes that produce programmable nonlinear friction-area relations, validated by BEM simulations.

  18. Positional LSH: Binary Block Matrix Approximation for Attention with Linear Biases

    cs.LG 2026-05 unverdicted novelty 7.0

    ALiBi bias is the expectation of positional LSH-induced block masks, yielding spectral and max-norm approximation bounds that reduce long-context biased attention to randomized short-context unbiased attention.

  19. Remix the Timbre: Diffusion-Based Style Transfer Across Polyphonic Stems

    cs.SD 2026-05 unverdicted novelty 7.0

    MixtureTT performs direct per-stem timbre transfer on polyphonic mixtures via a shared diffusion transformer, outperforming single-stem baselines on SATB choral data while eliminating cascaded separation errors.

  20. Reddit2Deezer: A Scalable Dataset for Real-World Grounded Conversational Music Recommendation

    cs.IR 2026-05 unverdicted novelty 7.0

    Reddit2Deezer supplies 190k authentic Reddit dialogues grounded in Deezer music entities for scalable conversational music recommendation research.

  21. Unified Modeling of Lane and Lane Topology for Driving Scene Reasoning

    cs.CV 2026-05 unverdicted novelty 7.0

    UniTopo unifies lane detection and topology reasoning into a single perception model, outperforming prior methods on OpenLane-V2 benchmarks with TOP_ll scores of 30.1% and 31.8%.

  22. From Holo Pockets to Electron Density: GPT-style Drug Design with Density

    cs.AI 2026-05 unverdicted novelty 7.0

    EDMolGPT generates drug-like molecules from low-resolution electron density point clouds of holo binding pockets and shows effectiveness across 101 biological targets.

  23. From Articulated Kinematics to Routed Visual Control for Action-Conditioned Surgical Video Generation

    cs.CV 2026-05 unverdicted novelty 7.0

    A kinematic-to-visual lifting paradigm combined with hierarchically routed control generates action-conditioned surgical videos with better faithfulness, fidelity, and efficiency.

  24. NeuralBench: A Unifying Framework to Benchmark NeuroAI Models

    cs.LG 2026-05 conditional novelty 7.0

    NeuralBench is a new benchmarking framework for neuroAI models on EEG data that finds foundation models only marginally outperform task-specific ones while many tasks like cognitive decoding stay highly challenging.

  25. Queryable LoRA: Instruction-Regularized Routing Over Shared Low-Rank Update Atoms

    cs.LG 2026-05 unverdicted novelty 7.0

    Queryable LoRA adds dynamic routing over shared low-rank atoms with attention and language-instruction regularization to make parameter-efficient fine-tuning more adaptive across inputs and layers.

  26. Fast Byte Latent Transformer

    cs.CL 2026-05 unverdicted novelty 7.0

    BLT-D, BLT-S, and BLT-DV use block-wise diffusion training and speculative verification to enable parallel byte generation in byte-level LMs, cutting memory-bandwidth cost by over 50%.

  27. MatryoshkaLoRA: Learning Accurate Hierarchical Low-Rank Representations for LLM Fine-Tuning

    cs.CL 2026-05 unverdicted novelty 7.0

    MatryoshkaLoRA inserts a crafted diagonal matrix P into LoRA to learn accurate nested low-rank adapters that support dynamic rank selection with minimal performance drop.

  28. SAM 3D Animal: Promptable Animal 3D Reconstruction from Images in the Wild

    cs.CV 2026-05 unverdicted novelty 7.0

    SAM 3D Animal is the first promptable framework for multi-animal 3D reconstruction from single images, built on SMAL+ and trained on the new Herd3D dataset, achieving SOTA results on Animal3D, APTv2, and Animal Kingdo...

  29. Structured Role-Aware Policy Optimization for Multimodal Reasoning

    cs.AI 2026-05 unverdicted novelty 7.0

    SRPO refines GRPO into role-aware token-level advantages by emphasizing perception tokens based on visual dependency (original vs. corrupted inputs) and reasoning tokens based on consistency with perception, unified v...

  30. Beyond LoRA vs. Full Fine-Tuning: Gradient-Guided Optimizer Routing for LLM Adaptation

    cs.CL 2026-05 unverdicted novelty 7.0

    MoLF routes updates between full fine-tuning and LoRA at the optimizer level to match or exceed the better of either static method, with an efficient LoRA-only variant outperforming prior adaptive approaches.

  31. VITA-QinYu: Expressive Spoken Language Model for Role-Playing and Singing

    cs.CL 2026-05 unverdicted novelty 7.0

    VITA-QinYu is the first expressive end-to-end spoken language model supporting role-playing and singing alongside conversation, trained on 15.8K hours of data and outperforming prior models on expressiveness and conve...

  32. The Interplay of Data Structure and Imbalance in the Learning Dynamics of Diffusion Models

    stat.ML 2026-05 unverdicted novelty 7.0

    Higher-variance classes are learned first in diffusion models; strong class imbalance reverses the order and imposes distinct delayed learning times on minority classes.

  33. Layer Collapse in Diffusion Language Models

    cs.LG 2026-05 unverdicted novelty 7.0

    Diffusion language models develop early-layer collapse around an indispensable super-outlier due to overtraining, resulting in higher compressibility and reversed optimal sparsity patterns versus autoregressive models.

  34. Layer Collapse in Diffusion Language Models

    cs.LG 2026-05 conditional novelty 7.0

    Early layers in diffusion language models like LLaDA-8B collapse into redundant representations around a critical super-outlier activation due to overtraining, making them more robust to quantization and sparsity than...

  35. Eulerian Motion Guidance: Robust Image Animation via Bidirectional Geometric Consistency

    cs.CV 2026-05 unverdicted novelty 7.0

    Eulerian adjacent-frame motion guidance plus bidirectional geometric consistency improves training speed, temporal coherence, and artifact reduction in diffusion-based image animation.

  36. Eulerian Motion Guidance: Robust Image Animation via Bidirectional Geometric Consistency

    cs.CV 2026-05 unverdicted novelty 7.0

    Eulerian adjacent-frame motion fields with bidirectional cycle consistency checks enable faster parallel training and fewer artifacts in diffusion model image animation compared to initial-frame Lagrangian guidance.

  37. Autoregressive Visual Generation Needs a Prologue

    cs.CV 2026-05 unverdicted novelty 7.0

    Prologue introduces dedicated prologue tokens to decouple generation and reconstruction in AR visual models, significantly improving generation FID scores on ImageNet while maintaining reconstruction quality.

  38. Navigating by Old Maps: The Pitfalls of Static Mechanistic Localization in LLM Post-Training

    cs.CL 2026-05 unverdicted novelty 7.0

    Transformer circuits show free evolution during SFT, rendering static mechanistic localization inadequate for future parameter updates due to inherent temporal latency.

  39. TFM-Retouche: A Lightweight Input-Space Adapter for Tabular Foundation Models

    cs.LG 2026-05 unverdicted novelty 7.0

    TFM-Retouche is an architecture-agnostic input-space residual adapter that improves tabular foundation model accuracy on 51 datasets by learning input corrections through the frozen backbone, with an identity guard to...

  40. MotionGRPO: Overcoming Low Intra-Group Diversity in GRPO-Based Egocentric Motion Recovery

    cs.CV 2026-05 unverdicted novelty 7.0

    MotionGRPO models diffusion sampling as a Markov decision process optimized with Group Relative Policy Optimization, using hybrid rewards and noise injection to boost sample diversity and local joint precision in egoc...

  41. Geometry-Aware State Space Model: A New Paradigm for Whole-Slide Image Representation

    cs.CV 2026-05 unverdicted novelty 7.0

    BatMIL uses hybrid hyperbolic-Euclidean geometry, an S4 state-space backbone, and chunk-level mixture-of-experts to outperform prior multiple-instance learning methods on seven whole-slide image datasets across six cancers.

  42. HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities

    cs.CL 2026-05 unverdicted novelty 7.0

    Training on automatically generated hard negative captions improves vision-language models' zero-shot detection of fine-grained image-text mismatches and robustness to noisy inputs.

  43. A foundation model of vision, audition, and language for in-silico neuroscience

    q-bio.NC 2026-05 unverdicted novelty 7.0

    TRIBE v2 is a multimodal AI model that predicts human brain activity more accurately than linear encoding models and recovers established neuroscientific findings through in-silico testing.

  44. Unified Multimodal Visual Tracking with Dual Mixture-of-Experts

    cs.CV 2026-05 unverdicted novelty 7.0

    OneTrackerV2 unifies multimodal tracking via Meta Merger and Dual Mixture-of-Experts to reach state-of-the-art results on five tasks and 12 benchmarks with efficiency and robustness when modalities are missing.

  45. iGENE: A Differentiable Flux-Tube Gyrokinetic Code in TensorFlow

    physics.plasm-ph 2026-05 unverdicted novelty 7.0

    A fully differentiable TensorFlow gyrokinetic code allows approximate gradients of nonlinear turbulence quantities to be used for outer-loop tasks such as profile prediction despite stochasticity.

  46. TCD-Arena: Assessing Robustness of Time Series Causal Discovery Methods Against Assumption Violations

    cs.LG 2026-05 unverdicted novelty 7.0

    TCD-Arena is a new customizable testing framework that runs millions of experiments to map how 33 different assumption violations affect time series causal discovery methods and shows ensembles can boost overall robustness.

  47. Mixture Prototype Flow Matching for Open-Set Supervised Anomaly Detection

    cs.CV 2026-05 unverdicted novelty 7.0

    MPFM uses flow matching with a Gaussian mixture prior on the velocity field and a mutual information maximizer to improve open-set anomaly detection over unimodal prototype methods.

  48. SpectraDINO: Bridging the Spectral Gap in Vision Foundation Models via Lightweight Adapters

    cs.CV 2026-05 unverdicted novelty 7.0

    SpectraDINO adapts frozen DINOv2 backbones to multispectral data via per-modality adapters and staged distillation with cosine, contrastive, patch, and neighborhood-structure losses, achieving SOTA on object detection...

  49. SignMAE: Segmentation-Driven Self-Supervised Learning for Sign Language Recognition

    cs.CV 2026-05 unverdicted novelty 7.0

    SignMAE uses segmentation-driven masking in a mask-and-reconstruct self-supervised task to learn fine-grained sign representations, achieving state-of-the-art accuracy on WLASL, NMFs-CSL, and Slovo with fewer frames a...

  50. Hydra-DP3: Frequency-Aware Right-Sizing of 3D Diffusion Policies for Visuomotor Control

    cs.RO 2026-05 conditional novelty 7.0

    Frequency analysis of smooth robot actions bounds denoising error to low-frequency modes, enabling a sub-1% parameter 3D diffusion policy with two-step inference that reaches SOTA on manipulation benchmarks.

  51. VAnim: Rendering-Aware Sparse State Modeling for Structure-Preserving Vector Animation

    cs.CV 2026-05 unverdicted novelty 7.0

    VAnim creates open-domain text-to-SVG animations via sparse state updates on a persistent DOM tree, identification-first planning, and rendering-aware RL with a new 134k-example benchmark.

  52. SplAttN: Bridging 2D and 3D with Gaussian Soft Splatting and Attention for Point Cloud Completion

    cs.CV 2026-05 conditional novelty 7.0

    SplAttN replaces hard projection with Gaussian soft splatting to avoid cross-modal entropy collapse, achieving SOTA point cloud completion on PCN and ShapeNet while maintaining visual cue dependency on KITTI.

  53. Arbitrarily Conditioned Hierarchical Flows for Spatiotemporal Events

    cs.LG 2026-05 unverdicted novelty 7.0

    ARCH is a hierarchical flow-based generative model that enables tractable conditional intensity computation and arbitrary conditioning for spatiotemporal event distributions.

  54. Physiology-Aware Masked Cross-Modal Reconstruction for Biosignal Representation Learning

    cs.LG 2026-05 unverdicted novelty 7.0

    xMAE pretrains biosignal representations via masked cross-modal reconstruction of temporally ordered signals like ECG and PPG, outperforming baselines on 15 of 19 downstream tasks including cardiovascular prediction a...

  55. AsymTalker: Identity-Consistent Long-Term Talking Head Generation via Asymmetric Distillation

    cs.LG 2026-05 unverdicted novelty 7.0

    AsymTalker maintains identity consistency in long-term diffusion talking-head videos by encoding temporal references from a static image and training a student model under inference-like conditions via asymmetric dist...

  56. Fusing Urban Structure and Semantics: A Conditional Diffusion Model for Cross-City OD Matrix Generation

    cs.LG 2026-05 unverdicted novelty 7.0

    SEDAN fuses graph-based urban semantics and spatial structure inside a conditional diffusion model to generate behaviorally plausible and geographically coherent OD matrices, reporting a 7.38% RMSE gain over the WEDAN...

  57. Purifying Multimodal Retrieval: Fragment-Level Evidence Selection for RAG

    cs.IR 2026-04 unverdicted novelty 7.0

    FES-RAG reframes multimodal RAG as fragment-level selection using Fragment Information Gain to outperform document-level methods with up to 27% relative CIDEr gains on M2RAG while shortening context.

  58. YOSE: You Only Select Essential Tokens for Efficient DiT-based Video Object Removal

    cs.CV 2026-04 unverdicted novelty 7.0

    YOSE accelerates DiT video object removal up to 2.5x by using BVI for adaptive token selection and DiffSim to simulate unmasked token effects, while preserving visual quality.

  59. Geometry-Based Neural-Network Prediction of Electron Localization Function Topology in Dense Hydrogen

    cond-mat.mtrl-sci 2026-04 unverdicted novelty 7.0

    A neural network predicts ELF topology in dense hydrogen from geometry with high accuracy and generalizes from fluid to crystalline phases.

  60. Learning Neural Operator Surrogates for the Black Hole Accretion Code

    astro-ph.HE 2026-04 unverdicted novelty 7.0

    Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in th...