pith. machine review for the scientific record.

arxiv: 2605.08290 · v1 · submitted 2026-05-08 · 💻 cs.LG · cs.AI

Recognition: 2 theorem links


Toward Optimal Regret in Robust Pricing: Decoupling Corruption and Time

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 02:15 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords dynamic pricing · robust online learning · regret bounds · binary search · adversarial corruption · online algorithms · revenue maximization

The pith

A robust variant of binary search achieves regret O(C + log T) when corruption budget C is known in dynamic pricing.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper studies dynamic pricing over T rounds where a seller posts prices to a buyer with an unknown fixed valuation and receives binary purchase feedback, but an adversary can flip the feedback in at most C rounds. The best prior robust bound, O(C log log T), couples the corruption budget multiplicatively with a log-log factor in T, leaving open whether the two dependences could be separated. The authors introduce a robust binary-search procedure that maintains candidate intervals while using carefully spaced price queries to isolate and absorb the corruptions. This yields the first bounds that are additive in C and only logarithmic in T, showing that corruption imposes a one-time cost rather than one that scales with the horizon.
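For orientation, here is a minimal sketch of the classic, non-robust halving procedure that the paper hardens. The `buy_feedback` oracle is a hypothetical stand-in for the buyer's binary purchase decision; the paper's actual algorithm is substantially more involved.

```python
def binary_search_pricing(buy_feedback, T):
    """Classic (non-robust) binary search pricing sketch.

    buy_feedback(p) returns True iff the posted price p is at most the
    buyer's valuation v*.  With clean feedback the active interval
    halves every round, so ~log2(T) rounds pin v* down to width 1/T.
    A single flipped bit, however, can discard the half containing v*
    forever -- the failure mode the paper's robust variant avoids.
    """
    lo, hi = 0.0, 1.0
    for _ in range(T):
        p = (lo + hi) / 2
        if buy_feedback(p):   # sale occurred: v* >= p
            lo = p
        else:                 # no sale: v* < p
            hi = p
    return lo  # lower end of the interval; never exceeds v*
```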

Core claim

We develop the first regret guarantees for robust dynamic pricing that decouple the dependence on the corruption C and the time horizon T. In particular, we develop a robust variant of binary search that achieves regret O(C + log T) when the corruption C is known and O(C + log² T) when the corruption is unknown.

What carries the argument

Robust binary search that maintains an active interval for the unknown valuation and selects prices to detect and absorb at most C corrupted binary feedbacks while narrowing the search range.
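The paper's construction (an interval tree with carefully spaced queries) cannot be reproduced from the abstract alone, but the baseline it improves on is easy to sketch: repeating each price query 2C+1 times and trusting the majority already absorbs C corruptions, at a multiplicative O(C · log T) cost rather than the paper's additive O(C + log T). A hedged illustration of that weaker baseline:

```python
def majority_vote_search(buy_feedback, depth, C):
    """Baseline robust search: repeat every price query 2C+1 times and
    follow the majority answer.  An adversary with budget C can never
    flip a majority, so the search always keeps the half containing v*,
    but the total query cost is (2C+1)*depth -- multiplicative in C,
    unlike the additive schedule the paper achieves."""
    lo, hi = 0.0, 1.0
    for _ in range(depth):
        p = (lo + hi) / 2
        yes_votes = sum(buy_feedback(p) for _ in range(2 * C + 1))
        if yes_votes > C:   # majority reports a sale: v* >= p
            lo = p
        else:
            hi = p
    return lo
```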

If this is right

  • Regret stays linear in the corruption budget C plus only logarithmic in T, so average per-round loss vanishes as T grows.
  • When C is known the bound is additive and nearly tight; when C is unknown an extra logarithmic factor appears but the dependence on T remains decoupled.
  • The seller can post prices that converge to the optimal revenue after a number of rounds linear in C plus logarithmic in T.
  • The same separation applies to the unknown-C case, replacing the single log T with log squared T.
  • Unlimited supply and binary feedback suffice for the guarantees, without needing richer observations.
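The decoupling in the bullets above can be visualized by comparing the prior multiplicative bound with the new additive one (constants and lower-order terms suppressed; the budget C = 100 is purely illustrative):

```python
import math

C = 100  # illustrative corruption budget
for T in (10**4, 10**6, 10**9):
    prior = C * math.log2(math.log2(T))  # O(C log log T), prior work
    new = C + math.log2(T)               # O(C + log T), known-C bound
    print(f"T={T:>10}: prior ~ {prior:6.1f}, new ~ {new:6.1f}")
```

Even at these modest horizons the additive bound is several times smaller, and the gap widens with T.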

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The interval-narrowing logic could be adapted to other online decision problems that receive binary feedback under bounded corruptions, such as certain threshold-based bandit settings.
  • If the valuation drifts slowly over time, the same corruption-handling mechanism might be restarted periodically without losing the additive-C property.
  • The construction suggests that many binary-search-style algorithms in corrupted environments can be made robust by adding a small number of verification queries spaced at corruption scale.
  • In finite-horizon deployments the method gives an explicit stopping rule once the remaining interval width is smaller than the per-round revenue loss tolerance.
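A back-of-envelope version of the stopping rule in the last bullet (the tolerance ε and the unit-width starting interval are our assumptions, not quantities the paper states):

```python
import math

def rounds_to_tolerance(eps):
    """Rounds of uncorrupted halving needed before the active interval
    (starting at width 1) is narrower than a per-round revenue
    tolerance eps; corruptions add rounds on top of this baseline."""
    return math.ceil(math.log2(1 / eps))
```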

Load-bearing premise

The buyer's valuation stays fixed and unknown across rounds while the adversary can corrupt at most C binary purchase feedbacks.

What would settle it

Execute the algorithm against an adversary that corrupts exactly C feedbacks and measure whether cumulative regret stays within O(C + log T) for known C; any systematic excess would disprove the bound.
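Such a falsification experiment could be harnessed roughly as follows. The policy interface (`price()` / `observe()`) and the non-robust halving baseline are illustrative stand-ins of ours, not the paper's algorithm:

```python
class HalvingPolicy:
    """Non-robust halving baseline, included only to exercise the
    harness; the paper's robust algorithm would take its place."""
    def __init__(self):
        self.lo, self.hi, self._p = 0.0, 1.0, 0.5
    def price(self):
        self._p = (self.lo + self.hi) / 2
        return self._p
    def observe(self, sale):
        if sale:
            self.lo = self._p
        else:
            self.hi = self._p

def regret_under_corruption(policy, v_star, T, flip_rounds):
    """Run T rounds against an adversary that flips the binary feedback
    in the rounds listed in flip_rounds (at most C of them).  Per-round
    regret is v* minus the revenue actually collected, since posting
    the price v* every round is the optimal fixed strategy."""
    regret = 0.0
    for t in range(T):
        p = policy.price()
        sale = p <= v_star
        regret += v_star - (p if sale else 0.0)
        policy.observe((not sale) if t in flip_rounds else sale)
    return regret
```

Plugging in the paper's algorithm and sweeping C and T would then directly test whether cumulative regret tracks C + log T.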

Figures

Figures reproduced from arXiv: 2605.08290 by Alberto Marchesi, Francesco Emanuele Stradi, Kalana Kalupahana, Matteo Castiglioni.

Figure 1. Tree of intervals of our robust dynamic pricing algorithm. The intervals induced by repeatedly halving I0 = [0, 1) can be represented as a binary tree of depth D = ⌈log2 T⌉. The green nodes are the on-path intervals leading to the correct leaf ℓ⋆, while the remaining intervals are off-path; the intervals at depth D are the leaves ℓj of the tree. The accompanying regret decomposition reads R_T ≤ O(C + log T + N_T) + Σ_{ℓ∈L} R(ℓ).
read the original abstract

We design the first regret guarantees for robust dynamic pricing that decouple the dependence on the corruption $C$ and the time horizon $T$. In dynamic pricing, a seller with unlimited supply of a good interacts with a stream of buyers over \( T \) rounds, with the goal of maximizing revenue. At each round $t$, the seller posts a price $p_t$, and the buyer purchases the good only if their unknown valuation $v^\star$ exceeds this price. The seller observes only the binary feedback $\mathbb{I} \left\{ p_t \leq v^\star \right\}$, indicating whether a sale occurred. In the \emph{robust} pricing setting, a malicious adversary is allowed to corrupt this feedback in at most $C$ rounds. Even if the learner knows the corruption $C$, the best known regret bound is $\mathcal{O}(C\log\log T)$ by Gupta et al. [2025]. This leaves as an open problem to ``decouple'' the dependence on $C$ and $T$. In this work, we resolve this open problem. In particular, we develop a robust variant of binary search that achieves regret $\mathcal{O}(C+\log T)$ when the corruption $C$ is known and $\mathcal{O}(C+\log^2 T)$ when the corruption is unknown.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript develops a robust variant of binary search for the dynamic pricing problem with adversarial corruptions to the binary feedback. It achieves regret bounds of O(C + log T) when the corruption level C is known and O(C + log² T) when C is unknown. This resolves the open problem of decoupling the regret dependence on the corruption budget C and the time horizon T, improving on the prior O(C log log T) bound from Gupta et al. [2025]. The setting assumes a fixed unknown buyer valuation v*, unlimited supply, and at most C corruptions to the binary purchase observations.

Significance. If the claimed bounds hold, this work is significant because it provides the first regret guarantees that separate the additive corruption cost from the logarithmic search cost in robust dynamic pricing. The explicit algorithmic construction under standard assumptions (fixed v*, limited corruptions) closes the gap left by prior work and is likely tight up to constants given the information-theoretic lower bound of Ω(C + log T). The separation into known-C and unknown-C regimes demonstrates careful design that advances robust online learning in revenue management.

minor comments (3)
  1. The citation 'Gupta et al. [2025]' in the abstract and introduction should include the full bibliographic entry in the references section for completeness and verifiability.
  2. A short discussion or reference to the matching information-theoretic lower bound Ω(C + log T) in the introduction would better contextualize the optimality of the achieved upper bounds.
  3. Including pseudocode for the robust binary search procedure (both known-C and unknown-C variants) in the algorithm section would improve clarity and aid reproducibility of the construction.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive summary of our work, recognition of its significance in resolving the open problem of decoupling corruption and time-horizon dependence in robust dynamic pricing, and recommendation for minor revision. No specific major comments were raised in the report.

Circularity Check

0 steps flagged

No significant circularity; algorithmic construction is self-contained

full rationale

The paper resolves the open problem via an explicit algorithmic construction: a robust variant of binary search for dynamic pricing under at most C adversarial corruptions to binary feedback. The claimed regret bounds O(C + log T) (known C) and O(C + log² T) (unknown C) follow from standard elimination analysis augmented with repetition or majority voting to absorb the additive C cost. No load-bearing step reduces by definition to its inputs, renames a fitted parameter as a prediction, or relies on a self-citation chain whose cited result is itself unverified. The derivation is independent of the target bounds and externally falsifiable via the stated assumptions on fixed v*, unlimited supply, and bounded corruptions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The result rests on standard domain assumptions for dynamic pricing and adversarial robustness; no free parameters or invented entities are introduced in the abstract.

axioms (2)
  • domain assumption Buyer's valuation v* is fixed and unknown across rounds
    Standard assumption in the dynamic pricing model described.
  • domain assumption Adversary corrupts at most C binary feedbacks
    Core definition of the robust setting.

pith-pipeline@v0.9.0 · 5548 in / 1130 out tokens · 39794 ms · 2026-05-12T02:15:58.886676+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

91 extracted references · 91 canonical work pages · 1 internal anchor
