pith. machine review for the scientific record.

arxiv: 2307.08336 · v2 · submitted 2023-07-17 · 💻 cs.LG · cs.RO

Recognition: unknown

RAYEN: Imposition of Hard Convex Constraints on Neural Networks

Authors on Pith: no claims yet
classification 💻 cs.LG cs.RO
keywords constraints, rayen, constraint, convex, satisfaction, network, networks, neural
Original abstract

Despite the numerous applications of convex constraints in Robotics, enforcing them within learning-based frameworks remains an open challenge. Existing techniques either fail to guarantee satisfaction at all times, or incur prohibitive computational costs. This paper presents RAYEN, a framework for imposing hard convex constraints on the output or latent variables of a neural network. RAYEN guarantees constraint satisfaction during both training and testing, for any input and any network weights. Unlike prior approaches, RAYEN avoids computationally expensive orthogonal projections, soft constraints, conservative approximations of the feasible set, and slow iterative corrections. RAYEN supports any combination of linear, convex quadratic, second-order cone (SOC), and linear matrix inequality (LMI) constraints, with negligible overhead compared to unconstrained networks. For instance, it imposes 1K quadratic constraints on a 1K-dimensional variable with only 8 ms of overhead compared to a network that does not enforce these constraints. An LMI constraint with 300x300 dense matrices on a 10K-dimensional variable can be guaranteed with only 12 ms additional overhead. When used in neural networks that approximate the solution of constrained trajectory optimization problems, RAYEN runs 20 to 7468 times faster than state-of-the-art algorithms, while guaranteeing constraint satisfaction at all times and achieving a near-optimal cost (<1.5% optimality gap). Finally, we demonstrate RAYEN's ability to enforce actuator constraints on a learned locomotion policy by validating constraint satisfaction in both simulation and real-world experiments on a quadruped robot. The code is available at https://github.com/leggedrobotics/rayen
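The abstract's core idea — mapping a raw network output into a convex feasible set by scaling it along a ray from a known interior point, rather than projecting — can be illustrated for the linear-constraint case. This is a minimal sketch of the ray-scaling principle, not the paper's exact formulation; the function name, the interior point `x0`, and the box example are illustrative assumptions.

```python
import numpy as np

def ray_scale_linear(v, A, b, x0):
    """Map a raw output v to a point y = x0 + s*v with A @ y <= b.

    x0 must be strictly feasible (A @ x0 < b). We compute kappa, the
    largest step along v that stays feasible, and clip the step to
    min(1, kappa): already-feasible outputs pass through unchanged,
    infeasible ones are pulled back onto the constraint boundary.
    """
    Av = A @ v
    slack = b - A @ x0            # strictly positive by assumption
    pos = Av > 1e-12              # only constraints that v moves toward
    kappa = np.min(slack[pos] / Av[pos]) if pos.any() else np.inf
    s = min(1.0, kappa)
    return x0 + s * v

# Box constraint -1 <= y_i <= 1, interior point at the origin.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
x0 = np.zeros(2)

y = ray_scale_linear(np.array([3.0, 0.5]), A, b, x0)   # infeasible input
print(y)                                               # scaled to the boundary
assert np.all(A @ y <= b + 1e-9)
```

Because the map is a differentiable rescaling (away from the clip point) rather than an iterative projection, it can sit at the end of a network and be trained through with ordinary backpropagation, which is what makes the overheads quoted in the abstract plausible.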

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Solving Max-Cut to Global Optimality via Feasibility-Preserving Graph Neural Networks

    cs.LG 2026-05 unverdicted novelty 7.0

    A Max-Cut-specific graph neural network predicts primal- and dual-feasible SDP solutions in linearithmic time, cutting bounding costs in exact branch-and-bound by up to 10.6 times versus a commercial SDP solver while ...

  2. LMI-Net: Linear Matrix Inequality--Constrained Neural Networks via Differentiable Projection Layers

    cs.LG 2026-04 unverdicted novelty 7.0

    LMI-Net enforces LMI constraints in neural networks by construction using a differentiable projection layer based on Douglas-Rachford splitting and implicit differentiation.

  3. Improving Feasibility via Fast Autoencoder-Based Projections

    cs.LG 2026-04 unverdicted novelty 7.0

    An adversarially trained autoencoder learns a convex latent space to enable rapid approximate projections that enforce nonconvex constraints in optimization and reinforcement learning.

  4. Parametric Nonconvex Optimization via Convex Surrogates

    math.OC 2026-04 unverdicted novelty 6.0

    A surrogate for parametric nonconvex optimization is constructed as the minimum of convex-monotonic function compositions and solved via parallel convex optimization, with a proof-of-concept on path tracking.