pith. machine review for the scientific record.

arxiv: 2605.10519 · v1 · submitted 2026-05-11 · 💻 cs.GT

Recognition: 2 theorem links


Online Resource Allocation With General Constraints

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 04:00 UTC · model grok-4.3

classification 💻 cs.GT
keywords: online resource allocation · best-of-both-worlds · regret bounds · budget constraints · general constraints · stochastic adversarial · Lagrangian method

The pith

An algorithm achieves O(sqrt(T)) regret against dynamic benchmarks for online resource allocation under both budget and general constraints.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper studies online resource allocation where decisions must respect budget limits as well as additional general constraints such as return-on-investment requirements. It introduces an algorithm that competes with a dynamic benchmark, attaining square-root regret in stochastic environments and alpha-regret of the same order in adversarial ones, with alpha set by the offline problem's feasibility margin. The method satisfies budgets exactly at every step and incurs only square-root cumulative violation on the general constraints. This setup captures realistic sequential decisions in advertising and revenue management where multiple economic rules must hold without advance knowledge of arrivals.
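In symbols, one plausible reading of the guarantees (our notation, reconstructed from the abstract, not necessarily the paper's: $x_t$ is the algorithm's decision, $\pi_t$ the dynamic benchmark, $f_t$ the reward, and $g_t$ the general-constraint functions):

```latex
% Regret against a dynamic benchmark and cumulative constraint violation
% (notation is our reconstruction from the abstract, not the paper's own).
\sum_{t=1}^{T} f_t(\pi_t) - \sum_{t=1}^{T} f_t(x_t)
  = \widetilde{\mathcal{O}}(\sqrt{T}) \quad \text{(stochastic regime)},
\qquad
\alpha \sum_{t=1}^{T} f_t(\pi_t) - \sum_{t=1}^{T} f_t(x_t)
  = \widetilde{\mathcal{O}}(\sqrt{T}) \quad \text{(adversarial regime)},
\qquad
V_T = \Bigl\| \Bigl( \textstyle\sum_{t=1}^{T} g_t(x_t) \Bigr)_{+} \Bigr\|_{\infty}
  = \widetilde{\mathcal{O}}(\sqrt{T}) \quad \text{(general-constraint violation)}.
```

Budget constraints, by contrast, are satisfied exactly at every round rather than in this cumulative sense.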

Core claim

By exploiting weak adaptivity to keep Lagrangian multipliers bounded, the algorithm obtains best-of-both-worlds performance: tilde-O(sqrt(T)) regret in the stochastic regime and alpha-regret of order tilde-O(sqrt(T)) in the adversarial regime against a dynamic benchmark, while enforcing strict budget feasibility and tilde-O(sqrt(T)) cumulative violation of general constraints, where alpha is determined by the positive feasibility margin of the corresponding offline problem.

What carries the argument

Weak adaptivity, which produces bounded Lagrangian multipliers even when general constraints lack the alignment that budgets provide.
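To make the mechanism concrete, here is a minimal toy primal-dual loop in which the dual variable is explicitly capped, mimicking the bounded-multiplier effect the paper derives from weak adaptivity. Everything here (the instance, the parameter names, the cap) is our illustrative invention, not the paper's algorithm: each round draws a reward coefficient and a consumption coefficient, the primal step best-responds to the Lagrangian, and the dual step is projected gradient ascent.

```python
import numpy as np

def run_primal_dual(T=20000, eta=0.02, rho=0.3, lam_cap=5.0, seed=0):
    """Toy primal-dual loop for a single soft constraint E[c*x] <= rho.

    Illustrative sketch only: the hard cap lam_cap stands in for the
    bounded multipliers the paper obtains via weak adaptivity.
    """
    rng = np.random.default_rng(seed)
    lam = 0.0                      # Lagrange multiplier for the soft constraint
    consumed = np.empty(T)
    for t in range(T):
        r = rng.uniform()          # per-round reward coefficient
        c = rng.uniform()          # per-round consumption coefficient
        # Primal step: best response to the Lagrangian r*x - lam*(c*x - rho)
        x = 1.0 if r > lam * c else 0.0
        consumed[t] = c * x
        # Dual step: projected gradient ascent, clipped to [0, lam_cap]
        lam = min(max(lam + eta * (c * x - rho), 0.0), lam_cap)
    return consumed, lam
```

Because the multiplier stays in [0, lam_cap], a telescoping argument bounds average consumption by roughly rho + lam_cap/(eta*T), which is the sublinear-violation shape the paper's analysis targets.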

If this is right

  • The same algorithm applies directly to online advertising campaigns that must meet both spending caps and ROI thresholds.
  • Classical budget-only methods are recovered as a special case when no general constraints are present.
  • The cumulative violation bound on general constraints remains sublinear even when the environment is chosen adversarially.
  • Performance guarantees continue to hold when the benchmark policy itself changes over time.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The emphasis on feasibility margins suggests that similar margin-based alpha factors could appear in other online problems mixing hard and soft constraints.
  • Testing the method on concrete revenue-management instances would reveal whether the hidden constant in alpha remains practical.
  • The weak-adaptivity technique may extend to settings where constraints arrive online rather than being fixed in advance.

Load-bearing premise

The offline problem possesses a positive feasibility margin that determines a positive alpha for the adversarial regret bound.
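One natural formalization of such a margin is a Slater-style condition on the offline problem (our reconstruction; the paper's exact definition may differ):

```latex
% Slater-style feasibility margin for the offline problem
% (our reconstruction; the paper's definition may differ).
\delta \;=\; \sup_{x \in \mathcal{X}} \; \min_{i} \bigl( -\bar{g}_i(x) \bigr) \;>\; 0
```

where the $\bar{g}_i$ are the offline general-constraint functions. The adversarial ratio alpha would then be some increasing function of delta, degrading as the margin vanishes; the exact dependence is what the paper must pin down.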

What would settle it

Construct an instance with zero feasibility margin in the offline problem and check whether the observed regret grows linearly with T rather than as sqrt(T).
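One generic way to operationalize that check (not anything from the paper): estimate the growth exponent of observed regret from two horizons on a log-log scale. An exponent near 0.5 indicates sqrt(T) growth; near 1.0, linear growth.

```python
import math

def growth_exponent(reg_small: float, reg_large: float,
                    t_small: int, t_large: int) -> float:
    """Estimate p such that regret ~ T^p from two (horizon, regret) samples."""
    return math.log(reg_large / reg_small) / math.log(t_large / t_small)

# If regret were exactly sqrt(T): sqrt(100) = 10, sqrt(10000) = 100,
# so growth_exponent(10, 100, 100, 10000) returns 0.5.
```

Running this on regret curves from a zero-margin instance versus a positive-margin one would make the conjectured phase change directly visible.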

Figures

Figures reproduced from arXiv: 2605.10519 by Alberto Marchesi, Eleonora Fidelia Chiefari, Francesco Emanuele Stradi, Matteo Castiglioni.

Figure 1. Illustration of the behavior of standard dual algorithms that do not use weakly adaptive regret minimizers. Left: in the budget-constrained setting, once ∥λt∥1 becomes sufficiently large, the algorithm stops violating and the dual variable starts decreasing. Right: under general constraints, violations may persist even when ∥λt∥1 is already large.
read the original abstract

Online resource allocation (ORA) is a fundamental framework for sequential decision-making problems under budget constraints, with applications ranging from online advertising to revenue management. In this work, we study a broader setting that includes both budget constraints and general constraints, extending the classical budget-only model. This extension is essential for modeling critical economic requirements, such as Return-on-Investment (ROI) constraints. We develop an algorithm that achieves best-of-both-world guarantees within this generalized framework. In particular, against a dynamic benchmark, our algorithm achieves $\widetilde{\mathcal O}(\sqrt{T})$ regret in the \emph{stochastic} regime and $\alpha$-regret of order $\widetilde{\mathcal O}(\sqrt{T})$ in the \emph{adversarial} regime, where $\alpha$ depends on the feasibility margin of the corresponding offline problem. At the same time, our algorithm guarantees strict satisfaction of the budget constraints and $\widetilde{\mathcal O}(\sqrt{T})$ cumulative violation for the general ones. From a technical perspective, introducing general constraints alongside budgets precludes the use of standard budget-focus methods. While budget methods rely on a zero-consumption ``safe'' action to ensure feasibility, general constraints are much less ``aligned'' towards feasibility. We overcome these difficulties with a new analysis that exploits \emph{weak adaptivity} to get boundedness of the Lagrangian multipliers and best-of-both-world guarantees.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript extends the online resource allocation framework to include both budget constraints and general constraints (e.g., ROI). It proposes an algorithm achieving best-of-both-worlds guarantees against a dynamic benchmark: stochastic regret of order O(sqrt(T)) and adversarial alpha-regret of order O(sqrt(T)), where alpha depends on the offline problem's feasibility margin. The algorithm strictly satisfies budget constraints while incurring O(sqrt(T)) cumulative violation on general constraints, via a new analysis that uses weak adaptivity to bound Lagrangian multipliers despite the lack of a zero-consumption safe action.

Significance. If the central analysis holds, the work meaningfully broadens classical budget-only ORA results to handle less-aligned general constraints that arise in economic applications. The best-of-both-worlds guarantees and the explicit dependence of alpha on the feasibility margin constitute a substantive technical advance; the exploitation of weak adaptivity to control multipliers is a potentially reusable idea for other non-aligned constraint settings.

major comments (2)
  1. [Abstract and offline-problem formulation] The abstract ties the alpha-regret bound directly to the offline feasibility margin, but the manuscript must explicitly define this margin (presumably in the offline-problem section) and prove that it yields a finite alpha without further alignment assumptions on the general constraints; this step is load-bearing for the adversarial claim.
  2. [Technical approach / regret analysis] The new analysis that invokes weak adaptivity to obtain bounded Lagrangian multipliers (despite general constraints lacking a safe action) is the key technical device; the manuscript should isolate this argument (likely in the regret-analysis section) and verify that the weak-adaptivity assumption is strictly weaker than standard full adaptivity while still sufficient for the O(sqrt(T)) bounds.
minor comments (2)
  1. [Abstract] Notation for the tilde-O and alpha-regret should be defined at first use and kept consistent with prior ORA literature.
  2. [Introduction] A brief comparison table or paragraph contrasting the new guarantees with existing budget-only results would improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for your careful reading and constructive feedback on our manuscript. We appreciate the positive assessment of the technical contributions and will revise the paper to improve clarity and presentation as suggested. We address each major comment below.

read point-by-point responses
  1. Referee: [Abstract and offline-problem formulation] The abstract ties the alpha-regret bound directly to the offline feasibility margin, but the manuscript must explicitly define this margin (presumably in the offline-problem section) and prove that it yields a finite alpha without further alignment assumptions on the general constraints; this step is load-bearing for the adversarial claim.

    Authors: We agree that an explicit definition and supporting proof are necessary for rigor. In the revised manuscript, we will add a formal definition of the offline feasibility margin in Section 2 (Offline Problem Formulation). We will also insert a new lemma in the analysis section proving that any positive feasibility margin implies a finite alpha, relying only on the problem's feasibility structure and without additional alignment assumptions on the general constraints. This will make the adversarial guarantee self-contained. revision: yes

  2. Referee: [Technical approach / regret analysis] The new analysis that invokes weak adaptivity to obtain bounded Lagrangian multipliers (despite general constraints lacking a safe action) is the key technical device; the manuscript should isolate this argument (likely in the regret-analysis section) and verify that the weak-adaptivity assumption is strictly weaker than standard full adaptivity while still sufficient for the O(sqrt(T)) bounds.

    Authors: We thank the referee for identifying this central technical device. In the revision, we will extract the weak-adaptivity argument into a dedicated subsection of the regret-analysis section. We will formally define weak adaptivity, provide a simple counter-example demonstrating it is strictly weaker than full adaptivity, and verify that the weaker notion is sufficient to bound the Lagrangian multipliers and recover the O(sqrt(T)) stochastic and alpha-regret bounds. revision: yes

Circularity Check

0 steps flagged

No significant circularity

full rationale

The derivation relies on a new analysis exploiting weak adaptivity to bound Lagrangian multipliers for general constraints, combined with standard Lagrangian methods and external dynamic benchmarks for regret bounds. No load-bearing step reduces by construction to a self-definition, fitted input renamed as prediction, or self-citation chain; the alpha-regret and violation bounds are derived from the offline feasibility margin and weak adaptivity assumptions without circular reduction to inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review; no explicit free parameters, axioms, or invented entities are described. The method appears to build on standard online learning and Lagrangian relaxation without introducing new fitted constants or postulated entities.

pith-pipeline@v0.9.0 · 5553 in / 1079 out tokens · 43726 ms · 2026-05-12T04:00:02.560860+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: the paper's claim is directly supported by a theorem in the formal canon.
  • supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: the paper appears to rely on the theorem as machinery.
  • contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
