Pinet: Optimizing hard-constrained neural networks with orthogonal projection layers
4 Pith papers cite this work.

Representative citing papers:
- Local LMO: Constrained Gradient Optimization via a Local Linear Minimization Oracle
Local LMO is a projection-free method that matches the convergence rates of projected gradient descent on constrained problems by replacing the projection step with a linear minimization oracle restricted to a small ball around the current iterate.
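As a concrete illustration, here is a minimal sketch of one such step, assuming box constraints and an l_inf trust ball so the local oracle has a closed form; the function names and the step rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def local_lmo_step(x, grad, lo, hi, radius, step=1.0):
    """One projection-free step: move toward the minimizer of <grad, v>
    over the box [lo, hi] intersected with the l_inf ball of the given
    radius around x. (Illustrative sketch, not the paper's algorithm.)"""
    # The linear objective separates per coordinate: push each coordinate
    # against its gradient sign, but no farther than the ball and box allow.
    v = np.where(grad > 0,
                 np.maximum(lo, x - radius),
                 np.minimum(hi, x + radius))
    # A convex combination of feasible points stays feasible for step in [0, 1].
    return x + step * (v - x)

x_next = local_lmo_step(np.array([0.5, -0.2]), np.array([1.0, -3.0]),
                        lo=-1.0, hi=1.0, radius=0.1)
```

Because the oracle only ever returns points inside the feasible set, every iterate stays feasible without a projection.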
- NLPOpt-Net: A Learning Method for Nonlinear Optimization with Feasibility Guarantees
NLPOpt-Net is an unsupervised neural architecture that learns parametric solutions to constrained NLPs by pairing a backbone network with quadratic projection layers that guarantee feasibility and near-zero constraint violations.
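To make the projection-layer idea concrete, here is a minimal sketch assuming the special case of linear equality constraints A y = b, where the orthogonal projection has a closed form; the paper's quadratic projection layers handle more general NLP constraints, and the module and variable names here are my own.

```python
import torch
import torch.nn as nn

class EqualityProjection(nn.Module):
    """Differentiable orthogonal projection onto {y : A y = b}.
    (Illustrative special case, not the paper's layer.)"""
    def __init__(self, A, b):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        # Precompute (A A^T)^{-1} once; A is assumed to have full row rank.
        self.register_buffer("AAT_inv", torch.linalg.inv(A @ A.T))

    def forward(self, y):
        residual = y @ self.A.T - self.b              # shape (batch, m)
        return y - residual @ self.AAT_inv @ self.A   # feasible output

A = torch.tensor([[1.0, 1.0, 0.0]])   # toy constraint: y1 + y2 = 1
b = torch.tensor([1.0])
net = nn.Sequential(nn.Linear(4, 3), EqualityProjection(A, b))
y = net(torch.randn(8, 4))            # every row satisfies A y = b exactly
```

Because the projection is an affine map of the backbone's output, gradients flow through it unchanged in structure, which is what lets the whole pipeline train end to end.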
- AdamFLIP: Adaptive Momentum Feedback Linearization Optimization for Hard Constrained PINN Training
AdamFLIP treats PDE constraint residuals in PINNs as a controlled dynamical system, computes Lagrange multipliers via feedback linearization to drive residuals to zero, and applies Adam-style adaptation to the resulting gradient for scalable hard-constrained training.
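Reading the summary literally, the mechanism might look like the following sketch on a toy problem: under gradient flow theta' = -(grad L + J^T lambda), choosing lambda so that J J^T lambda = K c - J grad L yields residual dynamics c' = -K c, which drives the constraints to zero. The gain K, the multiplier formula, and all names below are my reconstruction, not the paper's method.

```python
import torch

theta = torch.randn(3, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-2)   # Adam-style adaptation of the modified gradient
K = 5.0                                    # feedback gain (assumed)

def loss(t):     return (t ** 2).sum()                   # toy objective
def residual(t): return torch.stack([t.sum() - 1.0])     # toy constraint c(theta) = 0

for _ in range(200):
    g = torch.autograd.grad(loss(theta), theta)[0]
    c = residual(theta).detach()
    J = torch.autograd.functional.jacobian(residual, theta)  # (m, n)
    # Feedback linearization: pick multipliers so residuals decay as c' = -K c.
    lam = torch.linalg.solve(J @ J.T, K * c - J @ g)
    d = g + J.T @ lam                     # feedback-linearized gradient
    opt.zero_grad()
    theta.grad = d.detach()
    opt.step()
```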
- HardNet++: Nonlinear Constraint Enforcement in Neural Networks
HardNet++ enforces general nonlinear equality and inequality constraints on neural network outputs via an end-to-end trainable iterative process using damped local linearizations.
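A damped local-linearization loop of this flavor might look like the sketch below, assuming a single nonlinear equality constraint h(y) = 0 and Gauss-Newton corrections; the damping value, iteration count, and names are illustrative assumptions, and the paper additionally handles inequalities.

```python
import numpy as np

def h(y):  return np.array([y[0] ** 2 + y[1] ** 2 - 1.0])  # unit-circle constraint
def J(y):  return np.array([[2 * y[0], 2 * y[1]]])          # Jacobian of h

def enforce(y, damping=0.5, iters=20):
    """Drive h(y) toward 0 with damped corrections from the local
    linearization: y <- y - damping * J^T (J J^T)^{-1} h(y)."""
    for _ in range(iters):
        Jy = J(y)
        step = Jy.T @ np.linalg.solve(Jy @ Jy.T, h(y))
        y = y - damping * step
    return y

y = enforce(np.array([2.0, 1.0]))   # lands near the unit circle
```

Each iteration is differentiable in the network output, so a fixed number of damped corrections can be unrolled as a layer and trained end to end, which is the flavor of mechanism the summary describes.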