pith. machine review for the scientific record. sign in

arxiv: 2604.10778 · v1 · submitted 2026-04-12 · 🧮 math.OC

Recognition: unknown

Pseudoconvex Problems in Operational Decision Systems: Algorithms for Joint Learning and Optimization

Authors on Pith no claims yet

Pith reviewed 2026-05-10 15:26 UTC · model grok-4.3

classification 🧮 math.OC
keywords pseudoconvex optimizationjoint learning and optimizationoperational decision systemsenergy managementiterative algorithmsconvergence analysismulti-objective optimizationbilevel problems
0
0 comments X

The pith

A simultaneous iterative framework updates both machine learning models and pseudoconvex objectives to solve joint learning-optimization problems in decision systems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a way to handle decision problems where an outer optimization uses a pseudoconvex objective that relaxes convexity, while an inner part learns a model from historical data. This structure appears in energy planning with goals like higher renewable use alongside lower costs, or in retail with logit demand models. By updating the model parameters and the decisions together in an iterative loop, the method aims to find good joint solutions without solving the levels separately. A sympathetic reader would care because real systems often need this integration for timely, multi-goal decisions, and the authors provide algorithms that converge when the problems meet certain standard conditions. Tests on actual data show that allowing some inaccuracy in learning can save computation time while still producing usable results.

Core claim

The authors establish a simultaneous learning-and-optimization framework in which inner-level variables for machine learning training and outer-level variables for the pseudoconvex objective are updated iteratively. They develop convergent algorithms for this class of problems under realistic mathematical assumptions on the pseudoconvex outer objective and the inner learning task. Numerical experiments on real-world datasets demonstrate the performance and reveal trade-offs between the precision of the inner learning step and overall computational effort.

What carries the argument

The iterative simultaneous update procedure that refines both the parameters of the inner machine learning model and the decisions of the outer pseudoconvex optimization at each step.

If this is right

  • The framework produces solutions that trade off objectives such as renewable penetration against generation costs in energy systems.
  • Logit-based revenue maximization in retail can be handled jointly with demand model learning.
  • Convergence is guaranteed when the pseudoconvex structure and inner problem satisfy the stated realistic assumptions.
  • Inexact solutions to the inner learning problem can reduce runtime with limited impact on final decision quality.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This approach may apply to other operational settings with fractional or ratio-based objectives beyond energy and retail.
  • Users would benefit from practical tests to confirm the convergence conditions hold for their datasets.
  • The observed trade-off implies that adaptive stopping rules for the inner solver could further improve efficiency.

Load-bearing premise

The outer pseudoconvex objective and the inner learning problem must obey certain unspecified realistic mathematical assumptions that make the iterative scheme convergent.

What would settle it

An explicit example of a pseudoconvex fractional objective with a simple inner linear model for which the joint iteration fails to converge or reaches a suboptimal point despite satisfying the problem class description.

Figures

Figures reproduced from arXiv: 2604.10778 by Aswin Kannan, Zijun Li.

Figure 1
Figure 1. Figure 1: Hypervolume convergence for different outer- and inner-loop configurations. For any fixed [PITH_FULL_IMAGE:figures/full_fig_p024_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Hypervolume convergence for Out= 15 and In= 15 under different initial stepsizes γ0, using (a) fixed iteration Niters = 100 and (b) fixed time budgets T = 600. iterations (Figure 2a) and time (Figure 2b) are consistent, showing that an aggressive step size of γ0 = 1.0 yields superior performance. This setting allows the algorithm to converge rapidly. It achieves a hypervolume of approximately 0.85 after ju… view at source ↗
Figure 3
Figure 3. Figure 3: Total revenue for different combinations of outer loop ( [PITH_FULL_IMAGE:figures/full_fig_p026_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Total revenue for different combinations of outer loop ( [PITH_FULL_IMAGE:figures/full_fig_p026_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Total revenue with a fixed configuration of [PITH_FULL_IMAGE:figures/full_fig_p027_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Total revenue with a fixed configuration of [PITH_FULL_IMAGE:figures/full_fig_p027_6.png] view at source ↗
read the original abstract

We consider joint optimization and learning problems arising in real-time decision systems. While most existing work focuses primarily on convex, revenue-based objectives, we extend this line of research to multi-objective formulations. In energy systems, for instance, we incorporate metrics such as renewable penetration and generation costs. Our key focus, however, is on a class of problems with a pseudoconvex structure - a natural relaxation of convexity. Representative examples include fractional objectives in energy management and logit-based revenue models in retail. The outer-level problem optimizes these pseudoconvex objectives, while the inner-level problem involves training a machine learning model using historical data. Our contributions are twofold. First, we propose a simultaneous learning-and-optimization framework that iteratively updates both inner- and outer-level variables. Second, we develop convergent algorithms for these problem classes under realistic mathematical assumptions. Using real-world datasets, we evaluate the computational performance of our methods and highlight an important observation: there exist clear trade-offs between inexact learning and computational time when assessing final solution quality.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper considers joint learning-and-optimization problems in operational decision systems with pseudoconvex outer objectives (e.g., fractional energy-management metrics and logit revenue models). It proposes a simultaneous iterative framework that updates inner-level ML parameters and outer-level decision variables together, develops convergent algorithms for this class under realistic mathematical assumptions, and reports computational results on real-world datasets that illustrate trade-offs between inexact inner learning and solution quality.

Significance. If the convergence claims can be made rigorous with explicit, verifiable assumptions and the algorithms prove efficient on the cited problem classes, the work would usefully extend bilevel-style joint learning-optimization beyond convex revenue objectives to practically relevant pseudoconvex settings. The real-data evaluation is a strength that grounds the trade-off observation, though the absence of baseline comparisons and quantitative tables in the abstract limits immediate assessment of practical gains.

major comments (2)
  1. [Abstract] Abstract (and the algorithmic contribution): the central claim that 'convergent algorithms ... under realistic mathematical assumptions' are developed is load-bearing, yet the assumptions are never stated explicitly (no definition of the precise pseudoconvexity notion employed, no Lipschitz or smoothness conditions on the outer objective, no strong-convexity or gradient-Lipschitz requirements on the inner learning problem). Without these, it is impossible to check whether the fractional-energy or logit examples satisfy the hypotheses needed for the simultaneous-update scheme to converge.
  2. The manuscript supplies no proof sketches, theorem statements, or explicit convergence-rate results for the iterative scheme. This omission prevents verification of the second stated contribution and leaves open the possibility that the iteration diverges or reaches poor stationary points when the (unspecified) assumptions fail.
minor comments (2)
  1. The abstract refers to 'real-world datasets' and 'computational performance' but provides no numerical tables, baseline methods, or quantitative metrics (e.g., objective values, run times, or solution quality gaps). Adding such tables would make the evaluation section self-contained.
  2. Notation for the inner/outer variables and the pseudoconvex objective should be introduced with a clear mathematical formulation early in the paper to aid readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major point below and outline the planned revisions.

read point-by-point responses
  1. Referee: [Abstract] Abstract (and the algorithmic contribution): the central claim that 'convergent algorithms ... under realistic mathematical assumptions' are developed is load-bearing, yet the assumptions are never stated explicitly (no definition of the precise pseudoconvexity notion employed, no Lipschitz or smoothness conditions on the outer objective, no strong-convexity or gradient-Lipschitz requirements on the inner learning problem). Without these, it is impossible to check whether the fractional-energy or logit examples satisfy the hypotheses needed for the simultaneous-update scheme to converge.

    Authors: We agree that the assumptions must be stated explicitly to support verification of the claims. In the revised manuscript we will expand the abstract and introduction to include: (i) the precise definition of pseudoconvexity used for the outer objective, (ii) the Lipschitz and smoothness conditions imposed on the outer function, and (iii) the strong-convexity or gradient-Lipschitz requirements on the inner learning problem. We will also add a short paragraph confirming that the fractional energy-management metrics and logit revenue models satisfy these conditions under standard, verifiable assumptions on the data. revision: yes

  2. Referee: [—] The manuscript supplies no proof sketches, theorem statements, or explicit convergence-rate results for the iterative scheme. This omission prevents verification of the second stated contribution and leaves open the possibility that the iteration diverges or reaches poor stationary points when the (unspecified) assumptions fail.

    Authors: We acknowledge that the current manuscript does not contain formal theorem statements, proof sketches, or explicit convergence-rate results in the main text. In the revision we will insert a dedicated subsection that states the main convergence theorem for the simultaneous-update scheme, provides a concise proof sketch, and reports the convergence rates (sublinear in the general case, with faster rates under additional strong-convexity assumptions). This addition will allow readers to verify the claims directly. revision: yes

Circularity Check

0 steps flagged

No circularity detected in the joint learning-optimization framework or convergence claims.

full rationale

The paper proposes a simultaneous inner/outer update framework for pseudoconvex outer objectives paired with inner ML training, then states that convergent algorithms exist under 'realistic mathematical assumptions.' No equations, derivations, or self-citations are exhibited that reduce any claimed result to a fitted parameter, self-defined quantity, or prior author work by construction. The abstract and contributions remain self-contained against external benchmarks; the unspecified assumptions affect verifiability of convergence but do not create a definitional or predictive loop within the presented material.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only the abstract is available; no explicit free parameters, axioms, or invented entities are stated. The convergence claim implicitly rests on unlisted mathematical assumptions about pseudoconvexity and the learning problem.

pith-pipeline@v0.9.0 · 5474 in / 1102 out tokens · 28698 ms · 2026-05-10T15:26:02.956856+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

62 extracted references · 1 canonical work pages

  1. [1]

    Shanbhag

    Hesam Ahmadi and Uday V. Shanbhag. On the resolution of misspecified convex optimization and monotone variational inequality problems.Computational Optimization and Applications, 77:125–161, 2020

  2. [2]

    Simultaneous learning and optimization via misspecified saddle point problems

    Mohammad Mahdi Ahmadi and Erfan Yazdandoost Hamedani. Simultaneous learning and optimization via misspecified saddle point problems. 10 2025

  3. [3]

    Investing in wind energy using bi-level linear fractional programming.Energies, 16(13):4952, 2023

    Adel F Alrasheedi, Ahmad M Alshamrani, and Khalid A Alnowibet. Investing in wind energy using bi-level linear fractional programming.Energies, 16(13):4952, 2023

  4. [4]

    Shanbhag

    Necdet Serhat Aybat, Hesam Ahmadi, and Uday V. Shanbhag. On the analysis of inexact augmented lagrangian schemes for misspecified conic convex programs.IEEE Transactions on Automatic Control, 67(8):3981–3996, 2022

  5. [5]

    The big data newsvendor: Practical insights from machine learning.Operations Research, 67(1):90–108, 2019

    Gah-Yi Ban and Cynthia Rudin. The big data newsvendor: Practical insights from machine learning.Operations Research, 67(1):90–108, 2019

  6. [6]

    From predictive to prescriptive analytics.Management Science, 66(3):1025–1044, 2020

    Dimitris Bertsimas and Nathan Kallus. From predictive to prescriptive analytics.Management Science, 66(3):1025–1044, 2020

  7. [7]

    Optimization methods for large-scale machine learning.SIAM review, 60(2):223–311, 2018

    L´ eon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning.SIAM review, 60(2):223–311, 2018

  8. [8]

    Smard electricity market data, 2023

    Bundesnetzagentur (Federal Network Agency for Electricity, Gas, Telecommunications, Post and Railway). Smard electricity market data, 2023

  9. [9]

    Chan, Wayne S

    Choi S. Chan, Wayne S. Desarbo, and Patrick T. Harker. Product positioning under price competition.Management Science, 36(2):175–199, 1990

  10. [10]

    Chintagunta

    Pradeep K. Chintagunta. Endogeneity and heterogeneity in a probit demand model: Estimation using aggregate data.Marketing Science, 20(4):442–456, 2001

  11. [11]

    Springer, 2022

    Maxime C Cohen, Paul-Emile Gras, Arthur Pentecoste, and Renyu Zhang.Demand prediction in retail: A practical guide to leverage data and predictive analytics. Springer, 2022

  12. [12]

    Cohen, Ngai-Hang Zachary Leung, Kiran Panchamgam, Georgia Perakis, and An- thony Smith

    Maxime C. Cohen, Ngai-Hang Zachary Leung, Kiran Panchamgam, Georgia Perakis, and An- thony Smith. The impact of linear optimization on promotion planning.Operations Research, 65(2):446–468, 2017

  13. [13]

    Data aggregation and demand prediction

    Maxime C Cohen, Renyu Zhang, and Kevin Jiao. Data aggregation and demand prediction. Operations Research, 70(5):2597–2618, 2022

  14. [14]

    An augmented weighted tchebycheff method with adaptively chosen parameters for discrete bicriteria optimization problems.Com- puters and Operations Research, 39(12):2929–2943, 2012

    Kerstin D¨ achert, Jochen Gorski, and Kathrin Klamroth. An augmented weighted tchebycheff method with adaptively chosen parameters for discrete bicriteria optimization problems.Com- puters and Operations Research, 39(12):2929–2943, 2012

  15. [15]

    Multi-objective optimisation using evolutionary algorithms: an introduction

    Kalyanmoy Deb. Multi-objective optimisation using evolutionary algorithms: an introduction. InMulti-objective evolutionary optimisation for product design and manufacturing, pages 3–34. Springer, 2011

  16. [16]

    A fast and elitist multiobjective genetic algorithm: Nsga-ii.IEEE transactions on evolutionary computation, 6(2):182–197, 2002

    Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii.IEEE transactions on evolutionary computation, 6(2):182–197, 2002

  17. [17]

    Multi-objective optimization

    Kalyanmoy Deb, Karthik Sindhya, and Jussi Hakanen. Multi-objective optimization. InDecision sciences, pages 161–200. CRC Press, 2016

  18. [18]

    A unified parsimonious model for structural demand estimation accounting for stockout and substitution.Available at SSRN 4134738, 2022

    Yiting Deng, Yuexing Li, and Jing-Sheng Jeannette Song. A unified parsimonious model for structural demand estimation accounting for stockout and substitution.Available at SSRN 4134738, 2022

  19. [19]

    Demand models, revenue curves and profit.Athens Journal of Business and Economics, 9(2):221–230, 2023

    Moshe Eben-Chaime. Demand models, revenue curves and profit.Athens Journal of Business and Economics, 9(2):221–230, 2023

  20. [20]

    Multiobjective combinatorial optimization—theory, methodology, and applications

    Matthias Ehrgott and Xavier Gandibleux. Multiobjective combinatorial optimization—theory, methodology, and applications. InMultiple criteria optimization: State of the art annotated bibliographic surveys, pages 369–444. Springer, 2002

  21. [21]

    predict, then optimize

    Adam N. Elmachtoub, Ryan McNellis, and Stefanie Sun. Smart “predict, then optimize”. Management Science, 68(1):19–43, 2022. 29

  22. [22]

    Solving unit commitment prob- lems with general ramp constraints.International Journal of Electrical Power & Energy Systems, pages 313–326, 2008

    Antonio Frangioni, Claudio Gentile, and Fabrizio Lacalandra. Solving unit commitment prob- lems with general ramp constraints.International Journal of Electrical Power & Energy Systems, pages 313–326, 2008

  23. [23]

    A dual-uncertainty two-stage fractional programming model for regional power systems in the province of ontario, canada

    J Huang, CZ Huang, and S Nie. A dual-uncertainty two-stage fractional programming model for regional power systems in the province of ontario, canada. 2022

  24. [24]

    Demand functions in decision modeling: A comprehensive survey and research directions.Decision Sciences, 44(3):557–609, 2013

    Jian Huang, Mingming Leng, and Mahmut Parlar. Demand functions in decision modeling: A comprehensive survey and research directions.Decision Sciences, 44(3):557–609, 2013

  25. [25]

    Bilevel optimization: Convergence analysis and enhanced design

    Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. InInternational conference on machine learning, pages 4882–4892. PMLR, 2021

  26. [26]

    Two-settlement systems for electricity markets under network uncertainty and market power.Journal of Regulatory Economics, 25(1):5–37, 2004

    Rajinish Kamat and Shmuel Oren. Two-settlement systems for electricity markets under network uncertainty and market power.Journal of Regulatory Economics, 25(1):5–37, 2004

  27. [27]

    Hyperaspo: Fusion of model and hyper param- eter optimization for multi-objective machine learning.Proceedings of the IEEE International Conference on Big Data, 2021

    Aswin Kannan, Anamitra Roy Choudhury, Vaibhav Saxena, Saurabh Manish Raje, Parikshit Ram, Ashish Verma, and Yogish Sabharwal. Hyperaspo: Fusion of model and hyper param- eter optimization for multi-objective machine learning.Proceedings of the IEEE International Conference on Big Data, 2021

  28. [28]

    On solving nonsmooth retail portfolio maximization problems using active signature methods.Computational Management Science, 23(1), 2026

    Aswin Kannan, Timo Kreimeier, and Andrea Walther. On solving nonsmooth retail portfolio maximization problems using active signature methods.Computational Management Science, 23(1), 2026

  29. [29]

    Distributed stochastic optimization under imperfect information

    Aswin Kannan, Angelia Nedi´ c, and Uday V Shanbhag. Distributed stochastic optimization under imperfect information. In2015 54th IEEE Conference on Decision and Control (CDC), pages 400–405. IEEE, 2015

  30. [30]

    Computerized promotion and markdown price schedul- ing, September 2020

    Aswin Kannan and Kiran Panchamgam. Computerized promotion and markdown price schedul- ing, September 2020. US Patent 10,776,803

  31. [31]

    Aswin Kannan and Uday V Shanbhag. Optimal stochastic extragradient schemes for pseu- domonotone stochastic variational inequality problems and their variants.Computational Op- timization and Applications, 74(3):779–820, 2019

  32. [32]

    Shanbhag, and Harrison M

    Aswin Kannan, Uday V. Shanbhag, and Harrison M. Kim. Addressing supply-side risk in uncertain power markets: Stochastic Nash models, scalable algorithms and error analysis.Op- timization Methods Software, 28(5):1095–1138, 2013

  33. [33]

    Khairullah and Vinay Pandit

    Zahid Y. Khairullah and Vinay Pandit. Estimation of a multiplicative model for locating large retail stores. In Naresh K. Malhotra, editor,Proceedings of the 1985 Academy of Marketing Science (AMS) Annual Conference, pages 206–210, Cham, 2015. Springer International Pub- lishing

  34. [34]

    Adaptive weighted sum method for multiobjective optimiza- tion: a new method for pareto front generation.Structural and multidisciplinary optimization, 31(2):105–116, 2006

    Il Yong Kim and Olivier L de Weck. Adaptive weighted sum method for multiobjective optimiza- tion: a new method for pareto front generation.Structural and multidisciplinary optimization, 31(2):105–116, 2006

  35. [35]

    A survey on mixed-integer programming techniques in bilevel optimization.EURO Journal on Computational Optimiza- tion, 9:100007, 2021

    Thomas Kleinert, Martine Labb´ e, Ivana Ljubi´ c, and Martin Schmidt. A survey on mixed-integer programming techniques in bilevel optimization.EURO Journal on Computational Optimiza- tion, 9:100007, 2021

  36. [36]

    G. M. Korpelevich. The extragradient method for finding saddle points and other problems. 12:747–756, January 1976

  37. [37]

    Learning joint models of prediction and optimization, 2024

    James Kotary, Vincenzo Di Vito, Jacob Cristopher, Pascal Van Hentenryck, and Ferdinando Fioretto. Learning joint models of prediction and optimization, 2024

  38. [38]

    PhD thesis, Humboldt-Universit¨ at zu Berlin, 2023

    Timo Kreimeier.Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-Linear Approach. PhD thesis, Humboldt-Universit¨ at zu Berlin, 2023

  39. [39]

    Demand models for the static retail price optimization problem – a revenue management perspective.OpenAccess Series in Informatics, 37, 05 2014

    Timo Kunz and Sven Crone. Demand models for the static retail price optimization problem – a revenue management perspective.OpenAccess Series in Informatics, 37, 05 2014

  40. [40]

    Algorithm switching for multiobjective predictions in renewable energy markets

    Zijun Li and Aswin Kannan. Algorithm switching for multiobjective predictions in renewable energy markets. InInternational Conference on Learning and Intelligent Optimization, pages 233–248. Springer, 2024

  41. [41]

    Incremental on-line learning: A review and comparison of state of the art algorithms.Neurocomputing, 275:1261–1274, 2018

    Viktor Losing, Barbara Hammer, and Heiko Wersing. Incremental on-line learning: A review and comparison of state of the art algorithms.Neurocomputing, 275:1261–1274, 2018

  42. [42]

    Decision-focused learning: Foundations, state of the art, benchmark and future opportunities.Journal of Artificial Intelligence Research, 80:1623–1701, August 2024

    Jayanta Mandi, James Kotary, Senne Berden, Maxime Mulamba, Victor Bucarey, Tias Guns, and Ferdinando Fioretto. Decision-focused learning: Foundations, state of the art, benchmark and future opportunities.Journal of Artificial Intelligence Research, 80:1623–1701, August 2024

  43. [43]

    Effective implementation of theε-constraint method in multi-objective math- ematical programming problems.Applied mathematics and computation, 213(2):455–465, 2009

    George Mavrotas. Effective implementation of theε-constraint method in multi-objective math- ematical programming problems.Applied mathematics and computation, 213(2):455–465, 2009

  44. [44]

    Econometric analysis of qualitative response models

    Daniel McFadden. Econometric analysis of qualitative response models. In Z. Griliches†and M. D. Intriligator, editors,Handbook of Econometrics, volume 2, chapter 24, pages 1395–1457. Elsevier, 1 edition, 1984. 30

  45. [45]

    Communication-efficient learning of deep networks from decentralized data,

    H Brendan McMahan et al. Communication-efficient learning of deep networks from decentral- ized data.arXiv preprint arXiv:1602.05629, 2016

  46. [46]

    Nash-cournot equilibria in power markets on a linearized dc network with arbitrage: Formulations and properties.Networks and Spatial Theory, 3(2):123–150, 2003

    Carolyn Metzler, Benjamin F Hobbs, and Jong-Shi Pang. Nash-cournot equilibria in power markets on a linearized dc network with arbitrage: Formulations and properties.Networks and Spatial Theory, 3(2):123–150, 2003

  47. [47]

    A Comparative Study of Demand Forecasting Models for a Multi-Channel Retail Company: A Novel Hybrid Machine Learning Approach.SN Operations Research Forum, 3(4):1–22, December 2022

    Arnab Mitra, Arnav Jain, Avinash Kishore, and Pravin Kumar. A Comparative Study of Demand Forecasting Models for a Multi-Channel Retail Company: A Novel Hybrid Machine Learning Approach.SN Operations Research Forum, 3(4):1–22, December 2022

  48. [48]

    Renewable energies and levies, website

    Netztransparenz. Renewable energies and levies, website. https://www.netztransparenz.de/de- de/Erneuerbare-Energien-und-Umlagen., 2023

  49. [49]

    System advisor model version 2022.11.21 (sam 2022.11.21) website

    NREL. System advisor model version 2022.11.21 (sam 2022.11.21) website. pv cost data. national renewable energy laboratory. golden, co. https://sam.nrel.gov/photovoltaic/pv-cost- component.html., 2022

  50. [50]

    The iv formulation and linear approximations of the ac optimal power flow problem

    Richard P O’Neill, Anya Castillo, and Mary Cain. The iv formulation and linear approximations of the ac optimal power flow problem. Technical report, FERC, 2012

  51. [51]

    A computational study of linear approxi- mations to the convex constraints in the iterative linear iv-acopf formulation

    Richard P O’Neill, Anya Castillo, and Mary Cain. A computational study of linear approxi- mations to the convex constraints in the iterative linear iv-acopf formulation. Technical report, FERC, 2013

  52. [52]

    O’Neill, KoryW

    RichardP. O’Neill, KoryW. Hedman, EricA. Krall, Anthony Papavasiliou, and ShmuelS. Oren. Economic analysis of the n-1 reliable unit commitment and transmission switching problem using duality concepts.Energy Systems, 1(2):165–195, 2010

  53. [53]

    Introduction to optimization

    Boris T Polyak. Introduction to optimization. 1987

  54. [54]

    Dynamic assortment opti- mization with a multinomial logit choice model and capacity constraint.Operations research, 58(6):1666–1680, 2010

    Paat Rusmevichientong, Zuo-Jun Max Shen, and David B Shmoys. Dynamic assortment opti- mization with a multinomial logit choice model and capacity constraint.Operations research, 58(6):1666–1680, 2010

  55. [55]

    Ratner, and Christopher R´ e

    John Schulman, Zhiyuan He, Alexander J. Ratner, and Christopher R´ e. Predictive model misspecification in decision-making. InProceedings of the 38th International Conference on Machine Learning (ICML), pages 9355–9365, 2021

  56. [56]

    Takayuki Shiina and John R. Birge. Stochastic unit commitment problem.International Trans- actions in Operational Research, 11(1):19–32, 2004

  57. [57]

    An interactive weighted tchebycheff procedure for multiple objective programming.Mathematical programming, 26:326–344, 1983

    Ralph E Steuer and Eng-Ung Choo. An interactive weighted tchebycheff procedure for multiple objective programming.Mathematical programming, 26:326–344, 1983

  58. [58]

    A fractional programming approach for retail category price optimization.Journal of Global Optimization, 48(2):263–277, 2010

    Shivaram Subramanian and Hanif D Sherali. A fractional programming approach for retail category price optimization.Journal of Global Optimization, 48(2):263–277, 2010

  59. [59]

    Optimality and duality results for bilevel programming problem using convexifactors.Journal of optimization theory and applications, 150(1):1–19, 2011

    Surjeet Kaur Suneja and Bhawna Kohli. Optimality and duality results for bilevel programming problem using convexifactors.Journal of optimization theory and applications, 150(1):1–19, 2011

  60. [60]

    S. J. Wang, S. M. Shahidehpour, D.S. Kirschen, S. Mokhtari, and G.D. Irisarri. Short-term generation scheduling with transmission and environmental constraints using an augmented lagrangian relaxation.Power Systems, IEEE Transactions on, 10(3):1294–1301, 1995

  61. [61]

    Multiobjective optimization using evolutionary algorithms—a comparative case study

    Eckart Zitzler and Lothar Thiele. Multiobjective optimization using evolutionary algorithms—a comparative case study. InInternational conference on parallel problem solving from nature, pages 292–301. Springer, 1998

  62. [62]

    Performance assessment of multiobjective optimizers: An analysis and review

    Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M Fonseca, and Viviane Grunert Da Fonseca. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Transactions on evolutionary computation, 7(2):117–132, 2003. 31