pith. machine review for the scientific record.

arxiv: 2605.05905 · v1 · submitted 2026-05-07 · 💻 cs.LG · math.OC


Quadratic Objective Perturbation: Curvature-Based Differential Privacy


Pith reviewed 2026-05-09 15:58 UTC · model grok-4.3

classification 💻 cs.LG · math.OC
keywords differential privacy · objective perturbation · empirical risk minimization · strong convexity · interpolation regime · machine learning optimization · privacy mechanisms

The pith

Quadratic objective perturbation achieves differential privacy by adding random curvature to control sensitivity without bounding gradients.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper introduces Quadratic Objective Perturbation as an alternative to linear methods for making empirical risk minimization private. The new approach adds a random quadratic form to the objective so that its curvature alone induces strong convexity and bounds the solution's sensitivity to data changes. This removes the need for the bounded-gradient assumption that previous methods required, enabling privacy guarantees specifically in the interpolation regime. The work also shows that the guarantees survive approximate optimization and supplies excess-risk utility bounds along with an efficient solver.

Core claim

Perturbing the objective with a random positive-definite quadratic form whose eigenvalues dominate the loss Hessian produces a unique minimizer whose change under a single data-point swap is bounded by the inverse of the smallest eigenvalue of the perturbation; this bound directly yields (ε, δ)-differential privacy in the interpolation regime without any assumption that the loss gradients are bounded.
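The bound follows from a standard strong-convexity argument; a minimal sketch in the notation of the referee report below (F_D is the perturbed objective on dataset D, w*_D its minimizer, g = F_D − F_D', λ the smallest eigenvalue of the perturbation), not a transcription of the paper's proof:

```latex
% Minimal sketch, assuming F_D(w) = L_D(w) + \tfrac12 w^\top A w is
% \lambda-strongly convex and g = F_D - F_{D'} is \mathrm{Lip}(g)-Lipschitz
% on the region containing both minimizers.
\begin{align*}
F_D(w^*_{D'}) &\ge F_D(w^*_D) + \tfrac{\lambda}{2}\,\|w^*_{D'} - w^*_D\|^2
  && \text{($w^*_D$ minimizes $F_D$)}\\
F_{D'}(w^*_D) &\ge F_{D'}(w^*_{D'}) + \tfrac{\lambda}{2}\,\|w^*_D - w^*_{D'}\|^2
  && \text{($w^*_{D'}$ minimizes $F_{D'}$)}
\end{align*}
% Summing the two inequalities and using g = F_D - F_{D'}:
\[
\lambda\,\|w^*_D - w^*_{D'}\|^2 \;\le\; g(w^*_{D'}) - g(w^*_D)
\;\le\; \mathrm{Lip}(g)\,\|w^*_D - w^*_{D'}\|
\quad\Longrightarrow\quad
\|w^*_D - w^*_{D'}\| \;\le\; \frac{\mathrm{Lip}(g)}{\lambda}.
\]
```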

What carries the argument

The random quadratic perturbation, which adds the term ½xᵀAx with A drawn so its spectrum supplies both strong convexity and the sensitivity bound used for privacy.
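A minimal numpy sketch of one way such a draw could look; the Wishart-plus-shift construction and the names `sample_perturbation` and `lam_min` are illustrative assumptions, not the paper's stated sampler:

```python
import numpy as np

def sample_perturbation(d, lam_min, scale=1.0, rng=None):
    """Draw a random positive-definite A with smallest eigenvalue >= lam_min.

    Illustrative construction: a Wishart-style random PSD part G @ G.T plus a
    deterministic lam_min * I shift that enforces the eigenvalue floor the
    sensitivity bound needs.
    """
    rng = np.random.default_rng(rng)
    G = scale * rng.normal(size=(d, d))
    return G @ G.T + lam_min * np.eye(d)

def perturbed_objective(loss, A):
    """Return the QOP objective w -> loss(w) + 0.5 * w^T A w."""
    return lambda w: loss(w) + 0.5 * w @ A @ w
```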

If this is right

  • Privacy guarantees remain intact when the perturbed problem is solved only approximately.
  • The method supplies explicit bounds on empirical excess risk as a utility measure.
  • The perturbed problems can be solved efficiently by modern splitting schemes such as proximal or ADMM iterations (see the sketch after this list).
  • Theoretical and numerical comparisons show advantages over linear objective perturbation precisely when interpolation holds.
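One concrete instance of such a splitting scheme, sketched under assumptions: forward-backward splitting with a gradient step on the loss and an exact proximal step on the quadratic term. The function names and fixed step size are illustrative, not the paper's algorithm:

```python
import numpy as np

def solve_qop(grad_loss, A, w0, step=0.1, iters=500):
    """Forward-backward splitting on loss(w) + 0.5 * w^T A w.

    The prox of the quadratic term q(w) = 0.5 * w^T A w is the linear solve
    prox_{t*q}(v) = (I + t A)^{-1} v, so the iteration alternates a gradient
    step on the (possibly non-quadratic) loss with that solve.
    """
    M = np.eye(len(w0)) + step * A   # fixed system matrix for the prox step
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        v = w - step * grad_loss(w)  # forward step on the loss
        w = np.linalg.solve(M, v)    # backward (prox) step on the quadratic
    return w
```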

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same curvature idea might let differential privacy apply to over-parameterized models that naturally sit in the interpolation regime.
  • Stability engineered through the added quadratic term could replace data-dependent assumptions in other private-learning settings.
  • Adaptive choice of the quadratic matrix based on a rough estimate of the loss Hessian might further tighten the privacy-utility trade-off.
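To make the last speculation concrete, a minimal sketch of what an adaptive draw could look like, assuming a PSD Hessian estimate H_hat is available and ignoring the privacy cost of estimating it (which would need its own accounting); everything here is Pith's illustration, not the paper's mechanism:

```python
import numpy as np

def adaptive_perturbation(H_hat, lam_min, scale=0.1, rng=None):
    """Speculative adaptive draw: A = H_hat + Wishart noise + lam_min * I.

    If H_hat is PSD, A dominates the Hessian estimate by construction while
    keeping the eigenvalue floor lam_min that drives the sensitivity bound,
    so curvature is spent where the loss is already known to bend.
    """
    rng = np.random.default_rng(rng)
    d = H_hat.shape[0]
    G = scale * rng.normal(size=(d, d))
    return H_hat + G @ G.T + lam_min * np.eye(d)
```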

Load-bearing premise

The problem lies in the interpolation regime so the added quadratic curvature can dominate the loss and control sensitivity without any bound on the gradients.

What would settle it

For an interpolating loss whose gradients are unbounded, construct two adjacent datasets, apply the quadratic perturbation with a chosen minimum eigenvalue λ, and check whether the Euclidean distance between the two resulting minimizers exceeds 2/λ.
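A minimal numerical version of that test, assuming access to the two losses and a generic solver; the name `sensitivity_check` is illustrative, the 2/λ constant is taken from the text above, and a run can only falsify the bound, never certify it:

```python
import numpy as np
from scipy.optimize import minimize

def sensitivity_check(loss_D, loss_Dp, A, lam, d, seed=0):
    """Solve both perturbed problems on adjacent datasets and compare
    ||w*_D - w*_D'|| against the claimed 2/lam bound."""
    w0 = np.random.default_rng(seed).normal(size=d)
    quad = lambda w: 0.5 * w @ A @ w
    w_D  = minimize(lambda w: loss_D(w) + quad(w), w0).x
    w_Dp = minimize(lambda w: loss_Dp(w) + quad(w), w0).x
    gap = float(np.linalg.norm(w_D - w_Dp))
    return gap, 2.0 / lam, gap > 2.0 / lam   # True would falsify the claim
```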

Figures

Figures reproduced from arXiv: 2605.05905 by Coralia Cartis, Daniel Cortild.

Figure 1. Optimal upper bound on expected loss obtained by minimizing the right-hand side of Corollary
Figure 2. Empirical risks plotted on a logarithmic scale.
Original abstract

Objective perturbation is a standard mechanism in differentially private empirical risk minimization. In particular, Linear Objective Perturbation (LOP) enforces privacy by adding a random linear term, while strong convexity and stability are ensured by an additional deterministic quadratic term. However, this approach requires the strong assumption of bounded gradients of the loss function, which excludes many modern machine learning models. In this work, we introduce Quadratic Objective Perturbation (QOP), which perturbs the objective with a random quadratic form. This perturbation induces strong convexity and enforces stability of the problem through curvature, thereby enabling privacy and allowing sensitivity to be controlled through spectral properties of the perturbation rather than assumptions on the gradients. As a result, we obtain $(\varepsilon, \delta)$-differential privacy under weaker assumptions, in the interpolation regime. Furthermore, we extend the analysis to account for approximate solutions, showing that privacy guarantees are preserved under inexact solves. Additionally, we derive utility guarantees in terms of empirical excess risk, and provide a theoretical and numerical comparison to LOP, highlighting the advantages of curvature-based perturbations. Finally, we discuss algorithmic aspects and show that the resulting problems can be solved efficiently using modern splitting schemes.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 1 minor

Summary. The manuscript introduces Quadratic Objective Perturbation (QOP) for achieving differential privacy in empirical risk minimization. It perturbs the objective with a random quadratic form to induce strong convexity and control sensitivity via spectral properties, claiming (ε, δ)-DP under weaker assumptions than Linear Objective Perturbation (LOP), specifically without requiring bounded gradients on the loss, in the interpolation regime. The paper extends the analysis to approximate solutions, derives utility guarantees based on empirical excess risk, provides theoretical and numerical comparisons to LOP, and discusses efficient algorithmic solutions using splitting schemes.

Significance. If the central claims hold, this would be a meaningful contribution to differentially private machine learning by relaxing the bounded gradient assumption that restricts many existing methods. This could enable privacy-preserving training for models with unbounded gradients, such as certain neural networks or non-smooth losses. The inclusion of approximate solver analysis and utility bounds adds practical relevance, and the comparison to LOP helps position the method.

major comments (1)
  1. The assertion that QOP achieves privacy 'without any bound on the gradients of the loss function' (abstract) is load-bearing for the 'weaker assumptions' claim relative to LOP. However, the sensitivity bound for the minimizers relies on ||w*_D − w*_D'|| ≤ (1/λ) Lip(g), where g is the objective difference between neighboring datasets. If per-sample losses have unbounded gradients, Lip(g) can be unbounded even in the interpolation regime (which only guarantees zero loss at the minimum but does not restrict gradient growth away from it). This suggests the privacy proof may implicitly require a hidden regularity condition on the loss class, undermining the stated advantage. A concrete counterexample or explicit condition on the loss would be needed to substantiate the claim.
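A tiny numeric illustration of the worry, using squared loss on adjacent datasets whose swapped point differs in its features; here ∇g grows linearly in ||w||, so no global Lipschitz constant for g exists. This is the referee's logic made concrete, not an example drawn from the paper:

```python
import numpy as np

# Adjacent datasets differ in one point; g(w) = f(w; z) - f(w; z') for squared loss.
x,  y  = np.array([1.0, 2.0]), 0.0   # point in D
xp, yp = np.array([3.0, 1.0]), 5.0   # swapped-in point in D'

def grad_g(w):
    # grad g(w) = 2 (x.w - y) x - 2 (xp.w - yp) xp; its w-dependent part is
    # 2 (x x^T - xp xp^T) w, unbounded whenever x x^T != xp xp^T.
    return 2 * (x @ w - y) * x - 2 * (xp @ w - yp) * xp

for r in (1, 10, 100, 1000):
    print(r, np.linalg.norm(grad_g(r * np.ones(2))))  # grows linearly in r
```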
minor comments (1)
  1. The abstract mentions 'theoretical and numerical comparison to LOP' but does not specify the metrics (e.g., excess risk, runtime) or regimes where advantages are demonstrated; adding a brief pointer to the relevant section or table would improve clarity.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their careful reading and constructive feedback on our manuscript. We address the single major comment point-by-point below, providing the strongest honest defense of our claims while acknowledging where clarification is warranted. We will revise the manuscript accordingly to strengthen the presentation of assumptions.

Point-by-point responses
  1. Referee: The assertion that QOP achieves privacy 'without any bound on the gradients of the loss function' (abstract) is load-bearing for the 'weaker assumptions' claim relative to LOP. However, the sensitivity bound for the minimizers relies on ||w*_D − w*_D'|| ≤ (1/λ) Lip(g), where g is the objective difference between neighboring datasets. If per-sample losses have unbounded gradients, Lip(g) can be unbounded even in the interpolation regime (which only guarantees zero loss at the minimum but does not restrict gradient growth away from it). This suggests the privacy proof may implicitly require a hidden regularity condition on the loss class, undermining the stated advantage. A concrete counterexample or explicit condition on the loss would be needed to substantiate the claim.

    Authors: We appreciate this precise observation on the sensitivity analysis. The proof does employ a bound of the indicated form, with λ derived from the minimum eigenvalue of the random quadratic perturbation matrix. However, the core technical contribution is that sensitivity is controlled via the spectral properties (eigenvalue distribution) of the random quadratic rather than a fixed a priori bound on individual loss gradients. In the interpolation regime, the existence of an unperturbed minimizer with zero loss ensures that perturbed minimizers remain in a region where the effective curvature dominates gradient growth for the objective difference g; this allows the Lipschitz constant of g to be handled locally without requiring the global uniform bound on ||∇loss|| demanded by LOP. The condition is thus weaker: it requires only that g be Lipschitz (satisfied by standard convex losses under mild local regularity or bounded-data assumptions common in practice), not that gradients be uniformly bounded across all possible datasets. We will revise the abstract, introduction, and theorem statements to explicitly state this regularity condition on the loss class, add a short discussion contrasting it with LOP, and include a remark on the interpolation regime's role in controlling relevant regions. No counterexample is needed under the clarified assumption, as the method applies precisely when Lip(g) is finite.

    revision: yes

Circularity Check

0 steps flagged

No circularity: claims rest on explicit definitions of the new perturbation

full rationale

The paper defines QOP by adding a random quadratic perturbation whose minimum eigenvalue controls strong convexity and sensitivity directly via spectral properties. Privacy bounds, stability in the interpolation regime, and utility guarantees are derived from this construction plus standard DP arguments, without any reduction of the output bounds to fitted parameters, self-citations, or renamed inputs. The abstract and description contain no equations or steps where a claimed result is equivalent to its own inputs by construction; the derivation remains self-contained against the stated assumptions.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The central claim rests on the new quadratic perturbation being able to replace gradient bounds with spectral control; this is a domain assumption about the optimization landscape in the interpolation regime rather than a standard mathematical axiom.

free parameters (1)
  • Quadratic perturbation strength / curvature parameter
    A parameter that must be chosen to achieve a target privacy level and to dominate the loss Hessian in the interpolation regime.
axioms (1)
  • Domain assumption: The added random quadratic term induces strong convexity and stability whose sensitivity is governed solely by its spectral properties when the model interpolates the data.
    This replaces the bounded-gradient assumption of linear objective perturbation and is invoked to obtain the privacy guarantee.
invented entities (1)
  • Random quadratic perturbation matrix (no independent evidence)
    purpose: To add curvature-based noise that enforces privacy and stability.
    New mechanism introduced by the paper; no independent evidence outside the analysis is provided.

pith-pipeline@v0.9.0 · 5499 in / 1436 out tokens · 55847 ms · 2026-05-09T15:58:06.659362+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

58 extracted references · 17 canonical work pages · 1 internal anchor
