RAYEN: Imposition of Hard Convex Constraints on Neural Networks
Despite the numerous applications of convex constraints in Robotics, enforcing them within learning-based frameworks remains an open challenge. Existing techniques either fail to guarantee satisfaction at all times, or incur prohibitive computational costs. This paper presents RAYEN, a framework for imposing hard convex constraints on the output or latent variables of a neural network. RAYEN guarantees constraint satisfaction during both training and testing, for any input and any network weights. Unlike prior approaches, RAYEN avoids computationally expensive orthogonal projections, soft constraints, conservative approximations of the feasible set, and slow iterative corrections. RAYEN supports any combination of linear, convex quadratic, second-order cone (SOC), and linear matrix inequality (LMI) constraints, with negligible overhead compared to unconstrained networks. For instance, it imposes 1K quadratic constraints on a 1K-dimensional variable with only 8 ms of overhead compared to a network that does not enforce these constraints. An LMI constraint with 300×300 dense matrices on a 10K-dimensional variable can be guaranteed with only 12 ms additional overhead. When used in neural networks that approximate the solution of constrained trajectory optimization problems, RAYEN runs 20 to 7468 times faster than state-of-the-art algorithms, while guaranteeing constraint satisfaction at all times and achieving a near-optimal cost (<1.5% optimality gap). Finally, we demonstrate RAYEN's ability to enforce actuator constraints on a learned locomotion policy by validating constraint satisfaction in both simulation and real-world experiments on a quadruped robot. The code is available at https://github.com/leggedrobotics/rayen.
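To make the construction concrete, here is a minimal sketch of a ray-based feasibility layer in the spirit of RAYEN, restricted to the purely linear case {y : Ay <= b}. It is a hedged reconstruction under stated assumptions, not the repository's actual API; the class name RayFeasibleLinear and all details are illustrative. The idea: given a strictly interior point y0, the raw network output v is rescaled along the ray from y0 just enough to stay inside the polytope, so the result is feasible for every v and every weight setting.

```python
import torch

class RayFeasibleLinear(torch.nn.Module):
    """Sketch of a ray-based feasibility layer for {y : A y <= b}.

    Given a strictly interior point y0 (A y0 < b), a raw output v is
    mapped to y = y0 + v / max(1, kappa(v)), where kappa(v) is the
    smallest scaling that keeps y0 + v / kappa inside the polytope.
    The map is differentiable almost everywhere, so gradients flow
    through it during training. Illustrative only; not the RAYEN API.
    """

    def __init__(self, A: torch.Tensor, b: torch.Tensor, y0: torch.Tensor):
        super().__init__()
        assert torch.all(A @ y0 < b), "y0 must be strictly interior"
        self.register_buffer("A", A)
        self.register_buffer("slack", b - A @ y0)  # positive elementwise
        self.register_buffer("y0", y0)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # For constraint i, (A v)_i / slack_i is the scaling at which the
        # ray y0 + v / alpha reaches that constraint's boundary.
        ratios = (v @ self.A.T) / self.slack      # (batch, n_constraints)
        kappa = ratios.max(dim=-1, keepdim=True).values.clamp(min=1.0)
        return self.y0 + v / kappa


# Usage: constrain a 3-dimensional output to the box [-1, 1]^3.
A = torch.cat([torch.eye(3), -torch.eye(3)])
b = torch.ones(6)
layer = RayFeasibleLinear(A, b, y0=torch.zeros(3))
y = layer(10.0 * torch.randn(8, 3))
assert torch.all(y @ A.T <= b + 1e-6)  # every row satisfies A y <= b
```

Note the design choice this illustrates: feasibility comes from a cheap, closed-form rescaling rather than an orthogonal projection, which is why the overhead stays negligible even for many constraints.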
Forward citations
Cited by 4 Pith papers
- Solving Max-Cut to Global Optimality via Feasibility-Preserving Graph Neural Networks
  A Max-Cut-specific graph neural network predicts primal- and dual-feasible SDP solutions in linearithmic time, cutting bounding costs in exact branch-and-bound by up to 10.6 times versus a commercial SDP solver while ...
- LMI-Net: Linear Matrix Inequality-Constrained Neural Networks via Differentiable Projection Layers
  LMI-Net enforces LMI constraints in neural networks by construction, using a differentiable projection layer based on Douglas-Rachford splitting and implicit differentiation; a minimal sketch of the PSD-cone projection underlying such layers appears after this list.
- Improving Feasibility via Fast Autoencoder-Based Projections
  An adversarially trained autoencoder learns a convex latent space to enable rapid approximate projections that enforce nonconvex constraints in optimization and reinforcement learning.
- Parametric Nonconvex Optimization via Convex Surrogates
  A surrogate for parametric nonconvex optimization is constructed as the minimum of convex-monotonic function compositions and solved via parallel convex optimization, with a proof-of-concept on path tracking.
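As context for the LMI-Net entry above: projection-based LMI layers repeatedly need the Euclidean projection onto the positive semidefinite cone, which has a closed form via eigendecomposition and is differentiable almost everywhere. The sketch below shows only that standard building block; it is an illustrative assumption, not LMI-Net's actual code, and project_psd is a hypothetical helper name.

```python
import torch

def project_psd(X: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of a symmetric matrix onto the PSD cone.

    Clamping negative eigenvalues at zero yields the nearest (in
    Frobenius norm) positive semidefinite matrix. torch.linalg.eigh is
    differentiable, so this step can sit inside a splitting scheme
    such as Douglas-Rachford. Illustrative helper, not LMI-Net code.
    """
    X = 0.5 * (X + X.transpose(-1, -2))       # symmetrize for stability
    eigvals, eigvecs = torch.linalg.eigh(X)    # eigenvalues in ascending order
    eigvals = eigvals.clamp(min=0.0)           # drop the negative part
    # Reconstruct V diag(max(lambda, 0)) V^T.
    return (eigvecs * eigvals.unsqueeze(-2)) @ eigvecs.transpose(-1, -2)

X = torch.randn(5, 5)
P = project_psd(X)
assert torch.linalg.eigvalsh(P).min() >= -1e-6  # PSD up to round-off
```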