pith. machine review for the scientific record.

arxiv: 2206.14234 · v3 · submitted 2022-06-28 · 🧮 math.OC · cs.LG

Recognition: unknown

PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming

Authors on Pith: no claims yet
classification: 🧮 math.OC · cs.LG
keywords: pyepo · predict-then-optimize · end-to-end · optimization · algorithms · integer · library · linear
0 comments
abstract

In deterministic optimization, it is typically assumed that all problem parameters are fixed and known. In practice, however, some parameters may be a priori unknown but can be estimated from contextual information. A typical predict-then-optimize approach separates predictions and optimization into two distinct stages. Recently, end-to-end predict-then-optimize has emerged as an attractive alternative. This work introduces the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python. To the best of our knowledge, PyEPO (pronounced like *pineapple* with a silent "n") is the first such generic tool for linear and integer programming with predicted objective function coefficients. It includes various algorithms such as surrogate decision losses, black-box solvers, and perturbed methods. PyEPO offers a user-friendly interface for defining new optimization problems, applying state-of-the-art algorithms, and using custom neural network architectures. We conducted experiments comparing various methods on problems such as Shortest Path, Multiple Knapsack, and Traveling Salesperson Problem, and discussed empirical insights that may guide future research. PyEPO and its documentation are available at https://github.com/khalil-research/PyEPO.
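To make the "surrogate decision loss" idea in the abstract concrete, here is a minimal PyTorch sketch of the SPO+ loss (Elmachtoub and Grinberg's "Smart Predict, then Optimize" surrogate), one of the algorithm families PyEPO implements. The brute-force solver over an enumerated feasible set is an assumption for illustration only; PyEPO itself wraps real LP/MIP solvers and exposes its own API, which this sketch does not reproduce.

```python
import torch

def solve(cost, feasible):
    """Toy oracle: return the feasible point minimizing cost^T w by enumeration.
    (Stands in for a real LP/MIP solver; illustration only.)"""
    return feasible[torch.argmin(feasible @ cost)]

class SPOPlus(torch.autograd.Function):
    """SPO+ surrogate loss:
    l(c_hat, c) = -min_w (2 c_hat - c)^T w + 2 c_hat^T w*(c) - c^T w*(c)."""

    @staticmethod
    def forward(ctx, pred_cost, true_cost, feasible):
        w_true = solve(true_cost, feasible)               # w*(c), true optimum
        w_spo = solve(2 * pred_cost - true_cost, feasible)  # minimizer of shifted cost
        ctx.save_for_backward(w_true, w_spo)
        return (-(2 * pred_cost - true_cost) @ w_spo
                + 2 * pred_cost @ w_true
                - true_cost @ w_true)

    @staticmethod
    def backward(ctx, grad_out):
        w_true, w_spo = ctx.saved_tensors
        # Subgradient w.r.t. predicted costs: 2 (w*(c) - w_spo);
        # no gradients for the true costs or the feasible set.
        return grad_out * 2 * (w_true - w_spo), None, None

# Usage: pick one of three options (feasible set = unit vectors).
pred = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)  # predicted costs
true = torch.tensor([1.0, 2.0, 3.0])                      # realized costs
feasible = torch.eye(3)
loss = SPOPlus.apply(pred, true, feasible)
loss.backward()  # pred.grad now holds the SPO+ subgradient
```

The key point, which the end-to-end paradigm exploits, is that the backward pass needs only two solver calls rather than differentiation through the solver itself, so any black-box LP/MIP oracle can be plugged in.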

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. IGT-OMD: Implicit Gradient Transport for Decision-Focused Learning under Delayed Feedback

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    IGT-OMD reduces gradient transport error from quadratic to linear in delay length for delayed bilevel optimization and achieves sublinear regret with adaptive steps.

  2. Decision-Focused Learning via Tangent-Space Projection of Prediction Error

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Regret gradients in DFL are the tangent-space projection of prediction error scaled by curvature, enabling efficient direct computation without differentiating through solvers.