Learning Sparse Neural Networks through $L_0$ Regularization. arXiv preprint arXiv:1712.01312
4 Pith papers cite this work.
Abstract
We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of $L_0$ regularization. However, since the $L_0$ norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected $L_0$ norm of the resulting gated weights is differentiable with respect to the distribution parameters. We further propose the \emph{hard concrete} distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
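Since the gate construction is the heart of the method, a minimal sketch may help. Below is a PyTorch rendering (our language choice) of the hard concrete sampler and the differentiable expected $L_0$ penalty; the stretch limits γ = −0.1, ζ = 1.1 and temperature β = 2/3 follow the paper, while the variable names and the toy loss are illustrative.

```python
import math
import torch

# Stretch limits and temperature from the paper: gamma < 0 < 1 < zeta.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_hard_concrete(log_alpha: torch.Tensor) -> torch.Tensor:
    """Sample gates z in [0, 1]: a binary concrete sample, stretched to
    (gamma, zeta), then clipped by a hard-sigmoid."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA   # stretch beyond [0, 1]
    return s_bar.clamp(0.0, 1.0)         # hard-sigmoid gives exact zeros/ones

def expected_l0(log_alpha: torch.Tensor) -> torch.Tensor:
    """Expected L0 norm of the gated weights: the probability that each
    gate is nonzero, which is differentiable in log_alpha."""
    return torch.sigmoid(log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()

# Gate a weight matrix and penalize the expected number of active weights.
weights = torch.randn(64, 32, requires_grad=True)
log_alpha = torch.zeros(64, 32, requires_grad=True)  # one gate per weight
z = sample_hard_concrete(log_alpha)
task_loss = (weights * z).pow(2).mean()              # placeholder task loss
loss = task_loss + 1e-2 * expected_l0(log_alpha)
loss.backward()                                      # gradients reach log_alpha
```

Because the uniform noise `u` carries all the stochasticity, gradients flow to `log_alpha` by reparameterization, which is what lets the gate parameters be optimized jointly with the network weights as the abstract describes.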
Citing papers explorer
- Crafting Reversible SFT Behaviors in Large Language Models
  LCDD creates sparse carriers for SFT behaviors that SFT-Eraser can reverse, with ablations showing that the sparse structure enables causal control.
- Correcting Influence: Unboxing LLM Outputs with Orthogonal Latent Spaces
  A latent mediation framework with sparse autoencoders enables non-additive token-level influence attribution in LLMs by learning orthogonal features and back-propagating attributions.
- Light-FMP: Lightweight Feature and Model Pruning for Enhanced Deep Recommender Systems
  Light-FMP prunes features and model parameters in deep recommender systems by pretraining a hard-concrete masking layer on data subsets, then retraining the reduced model to improve both efficiency and accuracy over prior methods; a sketch of such a masking layer follows this list.
- Towards Rapid Constitutive Model Discovery from Multi-Modal Data: Physics Augmented Finite Element Model Updating (paFEMU)
  paFEMU enables rapid constitutive model discovery by integrating sparse regression, physics augmentation, and finite element adjoint optimization on multi-modal data for interpretable transfer learning.
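The Light-FMP entry reuses the hard concrete gate as a feature-level mask rather than a per-weight one. A minimal sketch of such a masking layer follows; the class name, the `kept_features` helper, and the prune-then-retrain framing are our illustration under that reading of the summary, not Light-FMP's actual API.

```python
import torch
import torch.nn as nn

GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0  # same hard concrete constants

class HardConcreteFeatureMask(nn.Module):
    """One learnable gate per input feature (hypothetical layer; the
    published Light-FMP implementation may differ)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:   # stochastic gates while pretraining the mask
            u = torch.rand_like(self.log_alpha)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / BETA)
        else:               # deterministic gates at evaluation time
            s = torch.sigmoid(self.log_alpha)
        z = (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)
        return x * z

    def kept_features(self) -> torch.Tensor:
        """Boolean mask of features whose gate survives the hard-sigmoid;
        the rest can be dropped before retraining the reduced model."""
        s = torch.sigmoid(self.log_alpha)
        return (s * (ZETA - GAMMA) + GAMMA) > 0

# After pretraining with task loss plus an expected-L0 penalty on log_alpha,
# keep only the surviving feature columns and retrain the smaller model.
mask = HardConcreteFeatureMask(num_features=8)
x = torch.randn(4, 8)
print(mask(x).shape, mask.kept_features())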