pith. machine review for the scientific record.

arxiv: 2605.04317 · v1 · submitted 2026-05-05 · 🧮 math.ST · stat.TH

Recognition: unknown

The Threshold Breakdown Point

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 16:46 UTC · model grok-4.3

classification 🧮 math.ST stat.TH
keywords threshold breakdown point · m-sensitivity · finite sample robustness · M-estimators · breakdown analysis · hypothesis testing · bootstrap inference · robust statistics

The pith

The threshold breakdown point is the smallest contamination fraction that forces an estimator past a chosen deviation level.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces measures of finite-sample robustness that focus on practical levels of data contamination rather than total breakdown. The threshold breakdown point gives the minimal fraction of contaminated observations needed to push an estimator or test statistic beyond a user-specified deviation. The finite-sample m-sensitivity gives the largest deviation that can occur when exactly m observations are replaced by arbitrary values. These quantities are derived explicitly for common M-estimators, their standard errors, and associated test statistics, and an asymptotic theory with multiplier bootstrap is supplied for estimating them from data.
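In symbols, a reconstruction from the summary above (the notation is ours; the paper's may differ): for an estimator T evaluated on a sample x^(n), with d a deviation measure,

    % m-sensitivity: worst-case deviation after replacing at most m of n points
    \eta_{m/n}(T, x^{(n)}) = \sup\bigl\{\, d\bigl(T(\tilde{x}^{(n)}), T(x^{(n)})\bigr) :
        \tilde{x}^{(n)} \text{ differs from } x^{(n)} \text{ in at most } m \text{ entries} \,\bigr\}

    % threshold breakdown point: smallest contamination fraction forcing the
    % deviation past a user-chosen level \delta
    \varepsilon^{*}_{\delta}(T, x^{(n)}) = \min\bigl\{\, m/n : \eta_{m/n}(T, x^{(n)}) \ge \delta \,\bigr\}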

Core claim

We define the threshold breakdown point as the smallest contamination fraction needed to induce a prescribed deviation, and the finite sample m-sensitivity as the worst-case deviation that an estimator can incur after m observations are contaminated. We derive these measures for commonly used M-estimators, their standard errors and related test statistics. This allows us to extend the decision breakdown point to obtain general breakdown characterizations for hypothesis testing, and show how these notions correspond to finite sample counterparts of the power and level breakdown functions.

What carries the argument

The threshold breakdown point, which inverts the usual breakdown calculation by solving for the contamination fraction that produces a fixed deviation rather than solving for the deviation at a fixed contamination fraction.
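Equivalently, in the assumed notation above, the two quantities are generalized inverses of one another:

    \varepsilon^{*}_{\delta} = \min\{\, m/n : \eta_{m/n} \ge \delta \,\}, \qquad
    \eta_{m/n} = \sup\{\, \delta : \varepsilon^{*}_{\delta} \le m/n \,\},

so a single sensitivity curve suffices to read off threshold breakdown points at every deviation level.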

If this is right

  • If the threshold breakdown point for a target deviation is high, the estimator or test remains stable under moderate contamination.
  • The m-sensitivity supplies a direct finite-sample bound on how far an estimator can move when a known number of points are altered (a brute-force sketch follows this list).
  • The same machinery applies to standard errors and test statistics, giving breakdown characterizations for both point estimation and inference.
  • The multiplier bootstrap yields consistent and asymptotically normal estimates of the threshold breakdown point and m-sensitivity, enabling uncertainty quantification in applications.
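To make the second bullet concrete, here is a minimal sketch (our construction, not the paper's code) of the m-sensitivity of the sample median. For the median the worst case is attained by sending the m smallest or largest points to ±∞; for other estimators this replacement strategy yields only a lower bound.

    import numpy as np

    def m_sensitivity_median(x, m, big=1e12):
        """Worst-case |median(x_tilde) - median(x)| over replacement of m points.
        Exact for the median; for general M-estimators this strategy only
        lower-bounds the supremum."""
        n = len(x)
        if m == 0:
            return 0.0
        if m >= (n + 1) // 2:
            return np.inf  # classical finite-sample breakdown of the median
        base = np.median(x)
        xs = np.sort(x)
        up = xs.copy()
        up[:m] = big        # replace the m smallest observations
        down = xs.copy()
        down[-m:] = -big    # replace the m largest observations
        return max(abs(np.median(up) - base), abs(np.median(down) - base))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    for m in (10, 50, 150):
        print(m, round(m_sensitivity_median(x, m), 3))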

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • A plot of threshold breakdown points across a range of deviation levels would produce a graded robustness profile for any given estimator (sketched in code after this list).
  • The two-sample testing application shows that these measures can be used to assess how contamination affects the actual decisions of a statistical test.
  • Because the measures are defined for finite samples, they offer a way to compare estimators on data sets of the size actually encountered in practice.
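A sketch of that graded profile, reusing the hypothetical m_sensitivity_median helper from the earlier sketch: sweep m to build the sensitivity curve, then invert it at a grid of deviation levels δ.

    import numpy as np

    def threshold_breakdown_profile(x, deltas, sensitivity_fn):
        """eps*(delta) = min{ m/n : eta_{m/n} >= delta } over a grid of deltas."""
        n = len(x)
        eta = np.array([sensitivity_fn(x, m) for m in range(n // 2 + 1)])
        profile = {}
        for d in deltas:
            hits = np.flatnonzero(eta >= d)  # first m whose deviation reaches d
            profile[d] = hits[0] / n if hits.size else np.nan
        return profile

    rng = np.random.default_rng(1)
    x = rng.standard_normal(200)
    print(threshold_breakdown_profile(x, [0.1, 0.5, 1.0, 2.0], m_sensitivity_median))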

Load-bearing premise

The explicit derivations assume that the M-estimators satisfy regularity conditions permitting closed-form breakdown calculations and that the contamination model yields well-defined worst-case deviations.

What would settle it

A simulation that contaminates data at fractions both below and above the computed threshold breakdown point, then checks whether the estimator's deviation crosses the prescribed level at exactly that fraction, would confirm or refute the measure. A sketch of this experiment follows.
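A minimal version for the sample median, again reusing the hypothetical m_sensitivity_median helper (the paper's simulations cover richer estimators and test statistics):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(500)
    n, delta = len(x), 0.5

    # smallest m whose worst-case deviation reaches delta; the computed
    # threshold breakdown point is m_star / n
    m_star = next(m for m in range(1, n) if m_sensitivity_median(x, m) >= delta)

    # the deviation should sit below delta just under the threshold and at or
    # above delta from the threshold onward
    for m in (m_star - 5, m_star, m_star + 5):
        dev = m_sensitivity_median(x, m)
        print(f"m/n = {m/n:.3f}  deviation = {dev:.3f}  reaches delta: {dev >= delta}")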

Figures

Figures reproduced from arXiv: 2605.04317 by Marco Avella Medina, Tianjun Ke.

Figure 1
Figure 1. Threshold breakdown points and m-sensitivity for the two-stage M-estimator under N(0, 1), Cauchy(0, 1), and Unif(0, 1) distributions with n = 1000, m ∈ {10, 20, …, 150}, and η ∈ {0.1, 0.2, …, 1}, using Huber's loss for location and Huber's proposal 2 for scale (Huber, 1964). The figure suggests a tail-dependent finite-sample effect: when m is sufficiently large for contamination to interact …
Figure 2
Figure 2. Upper and lower bounds for the rejection breakdown point BP_reject(ϕ, x^{(n)}) as a function of the effect size θ under N(θ, 1). Line types distinguish n ∈ {500, 1000, 2000}; the shaded band indicates the interval between the lower and upper bounds. The bound gap is smallest around weak signals, where the inequalities in Theorem 11 are empirically tightest for these procedures. The contrast between the score …
Figure 3
Figure 3. Sensitivity curves for the calcium–placebo blood pressure data. Panels (a)–(c) report the m-sensitivity η_{m/n}(θ̂_x^{(n_x)} − θ̂_y^{(n_y)}) for Huber's, logcosh, and self-concordant losses, respectively. The horizontal axis is the discrete level m/min(n_x, n_y); at m/min(n_x, n_y) = 0.5 complete breakdown occurs, matching the classical finite-sample regime. Red markers (and the dashed line connecting them) are the co…
Figure 4
Figure 4. Standardized test statistic bands for the calcium–placebo blood pressure data. For each loss, we report with the light-blue bars, at level m/min(n_x, n_y), the lower bound and upper bound given by Theorem 11 for (θ̂_x̃ − θ̂_ỹ)/ŝe(θ̂_x̃, θ̂_ỹ), where x̃ and ỹ denote the contaminated datasets at each level. Gray dashed lines are again one-sided thresholds z_{1−α}. A bar lying entirely above a gray line certifi…
Figure 5
Figure 5. Loss functions (left) and the corresponding score functions (right) for Huber's, logcosh, and self-concordant losses. The tuning parameters are set to δ = 1.345 for Huber, δ = 1.2047 for logcosh, and δ = 1.4811 for the self-concordant loss. [Plot residue removed; panels: (a) Huber's loss, (b) logcosh loss, (c) …]
Figure 6
Figure 6. m-sensitivity η_{m/n}(θ̂, x^{(n)}) for location M-estimators. Top panels: n = 1000 with m ∈ {20, 60, …, 460}, and line types distinguish data-generating distributions. Bottom panels: m/n ∈ {0.02, 0.06, …, 0.46}, and line types distinguish sample sizes under N(0, 1).
Figure 7
Figure 7. Gap between the upper and lower bounds on BP_reject(ϕ, x^{(n)}) as a function of n, with θ = 1. Points are Monte Carlo replications; line and (almost invisible) band are a fitted regression and its 95% confidence band. Line types distinguish the generating distribution. [Plot residue removed; axes: effect size vs. breakdown point of rejection; panel (a) Hube…]
Figure 8
Figure 8. Upper and lower bounds for the rejection breakdown point BP_reject(ϕ, x^{(n_x)}, y^{(n_y)}) as a function of the effect size θ in the two-sample problem. Line types distinguish n_x = n_y ∈ {50, 100, 200}; the shaded band indicates the interval between the lower and upper bounds.
Figure 9
Figure 9. Randomized PIT-based uniformity diagnostics for the multiplier bootstrap. Each panel corresponds to a different trimming proportion ε ∈ {0.03, 0.1, 0.15}, with n = 100, M = 1000, and B = 1000. Under a valid bootstrap approximation, we expect the distribution to be uniform. Panels show PP curves near y = x and histograms near the y = 1 baseline, with in-panel Kolmogorov–Smirnov statistics and p-values indi…
Figure 10
Figure 10. Randomized PIT-based uniformity diagnostics for the multiplier bootstrap. Each panel corresponds to a different trimming proportion ε ∈ {0.03, 0.1, 0.15}, with n = 1000, M = 3000, and B = 3000. The finite-sample deviation decreases compared to the previous plots.
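Figures 9 and 10 audit the multiplier bootstrap. For orientation, here is a minimal sketch of a generic Gaussian-multiplier bootstrap for a smooth statistic; the paper's procedure targets the threshold breakdown point and m-sensitivity and will differ in detail, and the influence-function route and all names below are our assumptions.

    import numpy as np

    def multiplier_bootstrap(x, stat_influence, B=1000, rng=None):
        """Approximate the law of sqrt(n) * (T_n - T) by perturbing the average
        of estimated influence values with i.i.d. N(0, 1) multiplier weights."""
        if rng is None:
            rng = np.random.default_rng()
        n = len(x)
        infl = stat_influence(x)      # estimated influence values, shape (n,)
        infl = infl - infl.mean()
        draws = np.empty(B)
        for b in range(B):
            w = rng.standard_normal(n)
            draws[b] = np.sqrt(n) * (w * infl).mean()
        return draws

    # Example: the sample mean, whose influence values are x_i - mean(x).
    rng = np.random.default_rng(3)
    x = rng.standard_normal(400)
    draws = multiplier_bootstrap(x, lambda z: z - z.mean(), rng=rng)
    print("bootstrap sd of sqrt(n)*(mean - mu):", round(draws.std(), 3), "(theory: ~1.0)")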
Original abstract

We introduce a novel approach to finite sample robustness that avoids the pessimism of traditional breakdown analyses. We define the threshold breakdown point, the smallest contamination fraction needed to induce a prescribed deviation, and the finite sample m-sensitivity, the worst-case deviation that an estimator can incur after m observations are contaminated. We derive these measures for commonly used M-estimators, their standard errors and related test statistics. This allows us to extend the decision breakdown point of Zhang (1996) to obtain general breakdown characterizations for hypothesis testing, and show how these notions correspond to finite sample counterparts of the power and level breakdown functions of He, Simpson and Portnoy (1990). We complement our work with an inferential framework for the threshold breakdown and m-sensitivity that yields consistency and asymptotic normality results, as well as a valid multiplier bootstrap for uncertainty quantification. We illustrate the practical utility of our methods in various numerical examples and an application to a two sample testing problem for a blood pressure dataset.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces the threshold breakdown point (the smallest contamination fraction ε inducing a prescribed deviation δ in an estimator or test statistic) and the finite-sample m-sensitivity (the worst-case deviation after m contaminated observations). It derives explicit expressions for these quantities for standard M-estimators, their standard errors, and related test statistics; extends Zhang's (1996) decision breakdown point to hypothesis testing; relates the new measures to finite-sample analogs of the power and level breakdown functions of He, Simpson and Portnoy (1990); and supplies an inferential framework establishing consistency, asymptotic normality, and multiplier bootstrap validity, with numerical illustrations and an application to a two-sample blood-pressure test.

Significance. If the derivations and asymptotic results hold, the work supplies a targeted, non-pessimistic finite-sample robustness tool that complements classical breakdown analysis and enables practical inference on robustness measures. Explicit derivations for M-estimators together with the bootstrap procedure constitute concrete strengths that could support applied use in robust statistics.

major comments (2)
  1. [§4] §4 (inferential framework), Theorem on asymptotic normality: the delta-method and multiplier-bootstrap arguments for the threshold breakdown point estimator presuppose Hadamard differentiability of the inverse bias map ε ↦ bias(ε). For typical M-estimators the bias function is monotone but not everywhere strictly differentiable (flat regions or kinks occur at the median or Huber estimator), so the stated asymptotic normality and bootstrap validity do not follow from the given regularity conditions without additional smoothing or strict-monotonicity assumptions. (A worked delta-method step illustrating the gap follows these comments.)
  2. [§3.2] §3.2 (derivations for test statistics): the extension of the decision breakdown point to hypothesis testing relies on the same inverse-bias construction; the same differentiability gap therefore propagates to the claimed consistency and normality results for the breakdown point of the test statistic.
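To make the first major comment concrete (our notation, not the paper's): write b(ε) for the worst-case bias at contamination fraction ε, so that the threshold breakdown point is ε_δ = b^{-1}(δ) and its plug-in estimator is ε̂_δ = b̂^{-1}(δ). When b is differentiable at ε_δ with b'(ε_δ) > 0, the delta method gives

    \sqrt{n}\,\bigl(\hat{\varepsilon}_{\delta} - \varepsilon_{\delta}\bigr)
      \;\rightsquigarrow\;
      \mathcal{N}\!\left(0,\; \frac{\sigma^{2}(\varepsilon_{\delta})}{b'(\varepsilon_{\delta})^{2}}\right),

where σ²(ε_δ) is the asymptotic variance of b̂(ε_δ). The limit is undefined where b'(ε_δ) = 0 (flat regions) and fails at kinks of b, which is exactly the gap flagged above.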
minor comments (2)
  1. [Abstract] The abstract lists 'various numerical examples' without enumerating the estimators or sample sizes; a short list would improve readability.
  2. [§2] Notation for population versus sample versions of the threshold breakdown point and m-sensitivity is introduced in §2 but reused without consistent subscripting in later sections.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and for identifying a potential gap in the regularity conditions underlying the asymptotic results. We address the two major comments point by point below.

Point-by-point responses
  1. Referee: [§4] §4 (inferential framework), Theorem on asymptotic normality: the delta-method and multiplier-bootstrap arguments for the threshold breakdown point estimator presuppose Hadamard differentiability of the inverse bias map ε ↦ bias(ε). For typical M-estimators the bias function is monotone but not everywhere strictly differentiable (flat regions or kinks occur at the median or Huber estimator), so the stated asymptotic normality and bootstrap validity do not follow from the given regularity conditions without additional smoothing or strict-monotonicity assumptions.

    Authors: We agree that the bias map for some M-estimators (e.g., the median or Huber) is not everywhere differentiable. The manuscript states the asymptotic normality and bootstrap results under the standing assumption that the inverse bias map is Hadamard differentiable at the relevant interior point. To make this explicit, we will add a short remark clarifying that the operating points considered in the examples lie away from kinks or flat regions, together with a mild local strict-monotonicity condition that guarantees the required differentiability. With this clarification the delta-method and multiplier-bootstrap arguments apply directly to the cases treated in the paper. revision: partial

  2. Referee: [§3.2] §3.2 (derivations for test statistics): the extension of the decision breakdown point to hypothesis testing relies on the same inverse-bias construction; the same differentiability gap therefore propagates to the claimed consistency and normality results for the breakdown point of the test statistic.

    Authors: The observation is correct: the breakdown-point results for test statistics are obtained by the identical inverse-bias construction. We will therefore carry the same local differentiability clarification and monotonicity condition into §3.2, ensuring that the consistency and asymptotic normality statements for the test-statistic breakdown points rest on the same (now explicitly stated) regularity assumptions as those for the estimators. revision: partial

Circularity Check

0 steps flagged

No significant circularity in definitions or derivations

Full rationale

The threshold breakdown point is introduced by direct definition as the infimum contamination fraction inducing a prescribed deviation, and m-sensitivity as the worst-case deviation after m contaminations; both are then computed explicitly for M-estimators and statistics under stated regularity conditions. These constructions extend external prior results (Zhang 1996; He, Simpson and Portnoy 1990) without reducing to self-referential inputs or fitted quantities renamed as predictions. The subsequent consistency, asymptotic normality, and multiplier bootstrap claims are presented as standard asymptotic arguments applied to the defined quantities, not forced by the definitions themselves. No load-bearing self-citations, ansatz smuggling, or uniqueness theorems imported from the authors appear in the chain. The derivation remains self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claims rest on standard domain assumptions for M-estimators and asymptotic statistics; no free parameters or invented physical entities are described in the abstract.

axioms (1)
  • domain assumption M-estimators satisfy the regularity conditions required for explicit finite-sample breakdown calculations and asymptotic inference.
    The abstract states derivations for commonly used M-estimators and asymptotic results for the new measures.

pith-pipeline@v0.9.0 · 5455 in / 1275 out tokens · 56358 ms · 2026-05-08T16:46:37.496161+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

121 extracted references · 8 canonical work pages · 1 internal anchor

  1. [1] Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016.
  2. [2] Some insights into depth estimators for location and scatter in the multivariate setting. arXiv preprint arXiv:2505.07383.
  3. [3] Rosenbaum, Paul R. Weighted …, 2014.
  4. [4] A general qualitative definition of robustness. Annals of Mathematical Statistics, 1971.
  5. [5] Robust Statistics: The Approach Based on Influence Functions. 1986.
  6. [6] Propose, test, release: Differentially private estimation with high probability. arXiv preprint arXiv:2002.08774, 2020.
  7. [7] Smooth sensitivity and sampling in private data analysis. Symposium on Theory of Computing (STOC).
  8. [8] Contributions to the theory of robust estimation. 1968.
  9. [9] Wang, Min and Liu, Guangying. A simple two-sample …, 2016.
  10. [10] Robust bounded-influence tests in general parametric models. Journal of the American Statistical Association, 1994.
  11. [11] Estimation and inference with weak, semi-strong, and strong identification. Econometrica, 2012.
  12. [12] Robust estimation of high-dimensional covariance and precision matrices. Biometrika, 2018.
  13. [13] Convergence rates for differentially private statistical estimation. International Conference on Machine Learning (ICML).
  14. [14] Cheng, Guang and Huang, Jianhua Z. Bootstrap consistency for general semiparametric …, 2010.
  15. [15] On the breakdown point of transport-based quantiles. Bernoulli (to appear).
  16. [16] Breakdown properties of optimal transport maps: general transportation costs. arXiv preprint arXiv:2603.16005.
  17. [17] Marusic, Juraj and Medina, Marco Avella and Rush, Cynthia. A theoretical framework for ….
  18. [18] On the robustness of semi-discrete optimal transport. Annals of Applied Probability (to appear).
  19. [19] Konen, Dimitri and Paindaveine, Davy. Existence and breakdown analysis of …, 2025.
  20. [20] On the robustness of spatial quantiles. Annales de l'Institut Henri Poincaré.
  21. [21] Davies, Laurie P. Asymptotic behaviour of …, 1987.
  22. [22] Least median of squares regression. Journal of the American Statistical Association, 1984.
  23. [23] Rousseeuw, Peter and Yohai, Victor. Robust regression by means of …, 1984.
  24. [24] High breakdown-point and high efficiency robust estimates for regression. Annals of Statistics, 1987.
  25. [25] On robustness and local differential privacy. Annals of Statistics, 2023.
  26. [26] Alabi, Daniel and Kothari, Pravesh K and Tankala, Pranay and Venkat, Prayaag and Zhang, Fred. Privately estimating a ….
  27. [27] Maximum bias curves for robust regression with non-elliptical regressors. Annals of Statistics, 2001.
  28. [28] Wald's test as applied to hypotheses in logit analysis. Journal of the American Statistical Association, 1977.
  29. [29] Robustness implies privacy in statistical estimation. Symposium on Theory of Computing (STOC).
  30. [30] The role of robust statistics in private data analysis. Chance, 2020.
  31. [31] Privacy-preserving parametric inference: a case for robust statistics. Journal of the American Statistical Association, 2021.
  32. [32] Differentially private inference via noisy optimization. Annals of Statistics, 2023.
  33. [33] Differentially private significance tests for regression coefficients. Journal of Computational and Graphical Statistics, 2019.
  34. [34] Private empirical risk minimization: Efficient algorithms and tight error bounds. 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 2014.
  35. [35] Prepivoting test statistics: a bootstrap view of asymptotic refinements. Journal of the American Statistical Association, 1988.
  36. [36] The accuracy of the Gaussian approximation to the sum of independent variates. Transactions of the American Mathematical Society, 1941.
  37. [37] Covariance-aware private mean estimation without private covariance estimation. Advances in Neural Information Processing Systems (NeurIPS).
  38. [38] Bandits with heavy tail. IEEE Transactions on Information Theory, 2013.
  39. [39] Private hypothesis selection. Advances in Neural Information Processing Systems (NeurIPS).
  40. [40] The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy. arXiv preprint arXiv:1902.04495.
  41. [41] Challenging the empirical mean and empirical variance: a deviation study. Annales de l'IHP Probabilités et Statistiques.
  42. [42] Differentially private empirical risk minimization. Journal of Machine Learning Research.
  43. [43] Convergence rates for differentially private statistical estimation. International Conference on Machine Learning (ICML).
  44. [44] Robust covariance and scatter matrix estimation under Huber's contamination model. The Annals of Statistics, 2018.
  45. [45] Maxbias curves of robust location estimators based on subranges. Journal of Nonparametric Statistics, 2002.
  46. [46] The power of bootstrap and asymptotic tests. Journal of Econometrics, 2006.
  47. [47] Breakdown and groups. The Annals of Statistics.
  48. [48] The breakdown point—examples and counterexamples. REVSTAT–Statistical Journal.
  49. [49] An automatic finite-sample robustness metric: when can dropping a little data make a big difference? arXiv preprint arXiv:2011.14999.
  50. [50] Influence diagnostics under self-concordance. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
  51. [51] On the accuracy of influence functions for measuring group effects. Advances in Neural Information Processing Systems (NeurIPS).
  52. [52] Most influential subset selection: Challenges, promises, and beyond. Advances in Neural Information Processing Systems (NeurIPS).
  53. [53] Robustness by reweighting for kernel estimators: an overview. Statistical Science, 2021.
  54. [54] Sub-Gaussian mean estimators. The Annals of Statistics, 2016.
  55. [55] Robust estimators in high-dimensions without the computational intractability. SIAM Journal on Computing, 2019.
  56. [56] Evaluating density forecasts. 1997.
  57. [57] The notion of breakdown point. A Festschrift for Erich L. Lehmann.
  58. [58] Minimax optimal procedures for locally private estimation. Journal of the American Statistical Association, 2018.
  59. [59] Differential privacy and robust statistics. Symposium on Theory of Computing (STOC).
  60. [60] Calibrating noise to sensitivity in private data analysis. Theory of Cryptography Conference, 2006.
  61. [61] The algorithmic foundations of differential privacy. Foundations and Trends …, 2014.
  62. [62] Differentially private chi-squared hypothesis testing: Goodness of fit and independence testing. ICML'16: Proceedings of the 33rd International Conference on Machine Learning, Volume 48.
  63. [63] Privacy induces robustness: Information-computation gaps and sparse mean estimation. Advances in Neural Information Processing Systems (NeurIPS).
  64. [64] Hampel, Frank R. [title not extracted]. Journal of the American Statistical Association.
  65. [65] Breakdown robustness of tests. Journal of the American Statistical Association, 1990.
  66. [66] Efficient mean estimation with pure differential privacy via a sum-of-squares exponential mechanism. Symposium on Theory of Computing (STOC).
  67. [67] Loss minimization and parameter estimation with heavy tails. The Journal of Machine Learning Research, 2016.
  68. [68] Robust Asymptotic Statistics. 1994.
  69. [69] Robust testing in linear models: the infinitesimal approach. 1982.
  70. [70] Huber, Peter J. and Ronchetti, Elvezio. Robust Statistics.
  71. [71] Robust Estimation of a Location Parameter. Annals of Mathematical Statistics, 1964.
  72. [72] Finite Sample Breakdown of M- and P-Estimators. The Annals of Statistics, 1984.
  73. [73] John W. Tukey's contributions to robust statistics. Annals of Statistics, 2002.
  74. [74] A simple resampling method by perturbing the minimand. Biometrika, 2001.
  75. [75] Private mean estimation of heavy-tailed distributions. Conference on Learning Theory (COLT).
  76. [76] Inference using noisy degrees: Differentially private beta-model and synthetic graphs. The Annals of Statistics, 2016.
  77. [77] User-friendly covariance estimation for heavy-tailed distributions. Statistical Science, 2019.
  78. [78] The threshold breakdown point. Working paper.
  79. [79] Private convex empirical risk minimization and high-dimensional regression. Conference on Learning Theory.
  80. [80] Introduction to empirical processes and semiparametric inference. 2008.

Showing first 80 references.