Sparse Discrete Laplace and Gaussian Mechanisms under Local Differential Privacy
Pith reviewed 2026-05-12 04:26 UTC · model grok-4.3
The pith
Sparse discrete Laplace and Gaussian mechanisms achieve local differential privacy with support cardinality as the key complexity parameter.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
For both the sparse discrete-Laplace and sparse Gaussian families, with kernels w(x,y) = exp(-λ d(x,y)) and exp(-d(x,y)²/(2σ²)) respectively, exact characterizations of pure and approximate LDP are given. Pure ε-LDP with input-dependent supports holds only when all supports coincide, and for approximate (ε,δ)-LDP the privacy defect is expressed via support leakage and excess privacy loss on the overlap region. Radius-truncated specializations yield explicit privacy-sparsity tradeoffs in terms of support size s, with the Gaussian overlap term exhibiting an additional quadratic dependence on the support radius.
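As a sanity check on the common-support case (a sketch under standard triangle-inequality arguments, not the paper's proof), the pure-LDP ratio for the discrete-Laplace kernel with a shared support S can be bounded directly:

```latex
% Sketch, not the paper's derivation: common support S, discrete-Laplace kernel.
M(y \mid x) = \frac{e^{-\lambda d(x,y)}}{Z(x)},
\qquad
Z(x) = \sum_{y' \in S} e^{-\lambda d(x,y')} .
% Triangle inequality gives d(x',y) - d(x,y) \le d(x,x')
% and Z(x') \le e^{\lambda d(x,x')} Z(x), hence
\frac{M(y \mid x)}{M(y \mid x')}
  = \frac{Z(x')}{Z(x)}\, e^{-\lambda \left( d(x,y) - d(x',y) \right)}
  \le e^{\lambda d(x,x')} \cdot e^{\lambda d(x,x')}
  = e^{2\lambda d(x,x')} .
```

Under these assumptions the mechanism is ε-LDP with ε at most 2λ times the input diameter, consistent with the claim that a common support suffices for pure LDP.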
What carries the argument
Sparse channels of the form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with small input-dependent support S(x), specialized to discrete Laplace and Gaussian kernels on a metric space.
If this is right
- Choosing the smallest support size that meets the privacy target minimizes distortion while achieving the desired local DP.
- Nontrivial approximate local privacy requires a minimum support size; below it, the mechanism cannot satisfy the (ε,δ) constraint.
- For Gaussian kernels, the privacy-sparsity tradeoff is sharper due to quadratic dependence on support radius in the overlap term.
- Larger supports reduce support leakage but increase distortion, balancing at the minimal viable s.
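One plausible source of the quadratic radius dependence (our reading, not the paper's formula): the Gaussian log kernel ratio factors as a difference of squared distances,

```latex
% Sketch under our reading, not the paper's exact overlap formula.
\frac{w(x,y)}{w(x',y)}
  = \exp\!\left( \frac{d(x',y)^2 - d(x,y)^2}{2\sigma^2} \right),
\qquad
d(x',y)^2 - d(x,y)^2 \le d(x,x')\,\bigl( d(x,y) + d(x',y) \bigr) .
```

On a support of radius R, the per-output log-ratio is then at most of order R·d(x,x')/σ², while the smallest retained kernel weight at the support boundary is e^{-R²/(2σ²)}, so the truncated overlap term picks up a quadratic dependence on R in the exponent.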
Where Pith is reading between the lines
- These mechanisms could extend to other kernels or metrics where support truncation controls privacy loss similarly.
- If the metric d(x,y) has specific structure like in graphs or lattices, the minimal support size might admit closed-form expressions.
- Implementing such mechanisms in practice would require efficient ways to sample from the truncated distributions while maintaining the exact privacy guarantees.
- The identification of support cardinality suggests it as a universal design knob for sparse local privacy across different noise families.
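The sampling point above can be made concrete with a minimal sketch: a radius-truncated sparse discrete-Laplace channel on the integers with d(x,y) = |x − y|. This is a toy instantiation under assumed parameters, not the paper's construction; names and signatures here are illustrative.

```python
import math
import random

def sparse_discrete_laplace(x, lam, radius, rng=random):
    """Sample from a radius-truncated sparse discrete-Laplace channel
    M(y|x) proportional to exp(-lam * |y - x|) restricted to the support
    S(x) = {x - radius, ..., x + radius}, so the support size is
    s = 2 * radius + 1. Illustrative sketch: the paper states the mechanism
    on a general metric space; here d(x, y) = |x - y| on the integers.
    """
    support = range(x - radius, x + radius + 1)
    weights = [math.exp(-lam * abs(y - x)) for y in support]
    total = sum(weights)  # normalizing constant Z(x)
    # Inverse-CDF sampling over the finite support.
    u = rng.random() * total
    acc = 0.0
    for y, w in zip(support, weights):
        acc += w
        if u <= acc:
            return y
    return x + radius  # numerical fallback for floating-point edge cases
```

Exact inverse-CDF sampling is feasible precisely because the support is small, which is the practical payoff of sparsity: sampling cost scales with s, not with the output domain size.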
Load-bearing premise
The analysis assumes the admissible output sets S(x) are small and can depend on the private input x, with a fixed distance metric d(x,y) for the radius-truncated cases.
What would settle it
If a radius-truncated sparse Gaussian mechanism with support size below the derived minimum were shown to satisfy the target (ε,δ)-LDP, or if increasing the support size beyond that minimum simultaneously improved both privacy and distortion, the tradeoff claims would be falsified.
Original abstract
We study sparse locally private channels of the form $M(y\mid x)\propto w(x,y) 1\{y\in S(x)\},$ where the admissible output set $S(x)$ is allowed to depend on the private input $x$ and is assumed to be small. Here, we consider the sparse discrete-Laplace family with kernel $w(x,y)=e^{-\lambda d(x,y)}$ and the sparse Gaussian family with kernel $w(x,y)=e^{-d(x,y)^2/(2\sigma^2)}$. For both families we give exact characterizations of pure and approximate local differential privacy. For pure $\varepsilon$-local differential privacy, we show that input-dependent sparse supports are obtained when all supports coincide. For $(\varepsilon,\delta)$-local differential privacy, we derive exact formulas for the privacy defect in terms of support leakage and excess privacy loss on the overlap region. We then specialize the analysis to radius-truncated sparse discrete-Laplace and radius-truncated sparse Gaussian mechanisms and obtain explicit privacy-sparsity tradeoffs in terms of the support size $s$. In particular, we show that nontrivial approximate local privacy requires a minimum support size, whereas larger supports reduce support leakage but increase distortion. For the Gaussian family, the overlap term exhibits an additional quadratic dependence on the support radius, which implies a sharper tradeoff between privacy and sparsity. These results identify the support cardinality as the intrinsic complexity parameter of the mechanism and yield an optimal design principle: choose the smallest support size that satisfies the target privacy constraint.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper examines sparse locally private mechanisms of the form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with input-dependent small supports S(x). It derives exact characterizations of pure ε-LDP and (ε,δ)-LDP for the sparse discrete Laplace family (kernel e^{-λ d(x,y)}) and sparse Gaussian family (kernel e^{-d(x,y)^2/(2σ^2)}). For pure LDP it shows that input-dependent supports arise only when all S(x) coincide. For approximate LDP it gives formulas for the privacy defect via support leakage and excess loss on overlaps. The analysis then specializes to radius-truncated supports to obtain explicit privacy-sparsity tradeoffs in terms of support cardinality s, showing a minimum s is needed for nontrivial approximate LDP and that larger s reduces leakage but increases distortion (with an extra quadratic term for the Gaussian case). The results identify s as the intrinsic complexity parameter and recommend choosing the smallest s meeting the target privacy level.
Significance. If the exact characterizations and tradeoffs hold, the work supplies a concrete design rule for communication-efficient LDP mechanisms by minimizing support size while controlling privacy loss, which is relevant for practical local privacy deployments. The explicit expressions for privacy defect in terms of leakage and overlap, together with the specialization yielding closed-form sparsity-privacy curves, constitute a useful technical contribution even if limited to the radius-truncated geometry.
major comments (3)
- [Abstract / specialization analysis] Abstract and the specialization paragraph: the claim that support cardinality s is 'the intrinsic complexity parameter' and that the optimal design is to 'choose the smallest support size' is load-bearing for the paper's main takeaway, yet these statements are obtained only after restricting to radius-truncated supports under a fixed metric d(x,y). The general characterizations (in terms of support leakage and excess loss on the overlap) do not by themselves establish that s alone determines the privacy-utility tradeoff for arbitrary shapes of S(x) with the same cardinality; other geometries could alter leakage or distortion for fixed s.
- [characterizations of pure and approximate LDP] The exact formulas for the privacy defect (support leakage plus excess loss on overlap) are presented as closed-form, but the manuscript provides no verification artifacts or proof sketches for the derivations of the pure-LDP coincidence condition or the (ε,δ) expressions; without these the central characterizations cannot be checked for hidden assumptions on the kernel or the metric d.
- [radius-truncated specialization] The weakest assumption noted in the analysis—that S(x) may depend on x and is small—is used throughout, yet the design principle of minimal s is derived only for the radius-truncated case; if the paper intends the result to apply more broadly, a counter-example or argument showing that other S(x) of size s cannot improve the tradeoff is needed to support the 'intrinsic' claim.
minor comments (2)
- [preliminaries] Notation for the admissible set S(x) and the distance d(x,y) should be introduced with a dedicated preliminary subsection to avoid ambiguity when supports vary with x.
- [Abstract] The abstract states 'nontrivial approximate local privacy requires a minimum support size' for the truncated case; a brief remark on whether this minimum is independent of the input domain size would clarify the scope.
Simulated Author's Rebuttal
We thank the referee for the thoughtful review and valuable suggestions. We address each major comment below, clarifying the scope of our results and committing to revisions that strengthen the presentation of the characterizations and specializations.
Point-by-point responses
Referee: [Abstract / specialization analysis] Abstract and the specialization paragraph: the claim that support cardinality s is 'the intrinsic complexity parameter' and that the optimal design is to 'choose the smallest support size' is load-bearing for the paper's main takeaway, yet these statements are obtained only after restricting to radius-truncated supports under a fixed metric d(x,y). The general characterizations (in terms of support leakage and excess loss on the overlap) do not by themselves establish that s alone determines the privacy-utility tradeoff for arbitrary shapes of S(x) with the same cardinality; other geometries could alter leakage or distortion for fixed s.
Authors: We agree with the referee that the identification of s as the intrinsic complexity parameter and the recommendation to choose the smallest support size are conclusions drawn specifically from the radius-truncated specialization. The general characterizations express the privacy defect in terms of support leakage and excess loss, which depend on the particular choice of the supports S(x). In the radius-truncated case, due to the uniform structure of metric balls, these quantities become functions of the cardinality s alone. We will revise the abstract and the relevant paragraphs to qualify these statements as applying to the radius-truncated sparse mechanisms. This revision will clarify that while s is key in this natural family, general supports may require case-by-case analysis. revision: yes
Referee: [characterizations of pure and approximate LDP] The exact formulas for the privacy defect (support leakage plus excess loss on overlap) are presented as closed-form, but the manuscript provides no verification artifacts or proof sketches for the derivations of the pure-LDP coincidence condition or the (ε,δ) expressions; without these the central characterizations cannot be checked for hidden assumptions on the kernel or the metric d.
Authors: The full proofs for the pure ε-LDP result (input-dependent supports only when S(x) are identical for all x) and the (ε,δ)-LDP privacy defect formulas are included in the appendix. To address this, we will incorporate concise proof sketches into the main body of the paper, outlining the key steps: starting from the definition of LDP, deriving the condition for pure privacy, and decomposing the approximate privacy loss into leakage from non-overlapping supports and excess loss within overlaps. These sketches will confirm there are no additional assumptions beyond the kernel definitions and the metric properties used. revision: yes
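For a discrete output space, the standard hockey-stick characterization of (ε,δ)-DP already suggests the shape of such a decomposition (our reading of the claimed formula, not the paper's exact statement): the tightest δ between inputs x and x' splits exactly into the mass x places outside the other support plus the clipped excess on the overlap:

```latex
\delta(x, x')
  = \sum_{y} \bigl[\, M(y \mid x) - e^{\varepsilon} M(y \mid x') \,\bigr]_+
  = \underbrace{M\bigl( S(x) \setminus S(x') \,\big|\, x \bigr)}_{\text{support leakage}}
  \;+\; \underbrace{\sum_{y \in S(x) \cap S(x')}
      \bigl[\, M(y \mid x) - e^{\varepsilon} M(y \mid x') \,\bigr]_+}_{\text{excess loss on overlap}} ,
```

where the first term collects outputs with M(y|x') = 0, so the clipped difference reduces to M(y|x) there.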
Referee: [radius-truncated specialization] The weakest assumption noted in the analysis—that S(x) may depend on x and is small—is used throughout, yet the design principle of minimal s is derived only for the radius-truncated case; if the paper intends the result to apply more broadly, a counter-example or argument showing that other S(x) of size s cannot improve the tradeoff is needed to support the 'intrinsic' claim.
Authors: The analysis uses the general form with input-dependent small supports, but the explicit tradeoff curves are obtained by specializing to radius-truncated supports, which are a canonical choice that minimizes the expected distortion for a given cardinality under the distance metric. We will add a discussion in the specialization section arguing that for a fixed metric, the ball-shaped supports optimize the concentration of probability mass, thereby providing the best privacy-sparsity curve among supports of size s. While an exhaustive comparison to all possible subsets is intractable, this geometric optimality supports the design rule in the contexts considered. We do not claim universality beyond this family. revision: partial
Circularity Check
No circularity: derivations rest on standard LDP definitions and explicit geometry
full rationale
The paper's characterizations of pure and approximate LDP follow directly from the definitions of local differential privacy applied to the given sparse kernel forms w(x,y) and indicator on S(x). The subsequent specialization to radius-truncated supports produces explicit formulas for privacy defect (support leakage plus excess loss on overlap) that are computed from the metric d and radius; the resulting privacy-sparsity tradeoffs and minimum-support-size statements are consequences of those formulas rather than presupposed inputs. Support cardinality s emerges as the complexity parameter only after these derivations, with no self-definitional loops, fitted parameters renamed as predictions, or load-bearing self-citations reducing the central claims to their own assumptions. The derivation chain remains self-contained against the external benchmarks of LDP definitions.
Axiom & Free-Parameter Ledger
free parameters (2)
- support size s
- kernel scale λ or σ
axioms (2)
- domain assumption: The mechanism takes the normalized form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with w exponential in distance.
- standard math: The distance function d(x,y) is a fixed metric on the domain.
Reference graph
Works this paper leans on
[1] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography Conference. Springer, 2006, pp. 265–284.
[2] C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–487, 2014.
[3] Ú. Erlingsson, V. Pihur, and A. Korolova, "RAPPOR: Randomized aggregatable privacy-preserving ordinal response," in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, 2014, pp. 1054–1067.
[4] J. Acharya, P. Kairouz, Y. Liu, and Z. Sun, "Estimating sparse discrete distributions under privacy and communication constraints," in Algorithmic Learning Theory. PMLR, 2021, pp. 79–98.
[5] M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, "Geo-indistinguishability: Differential privacy for location-based systems," in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, 2013, pp. 901–914.
[6] L. Kacem and C. Palamidessi, "Geometric noise for locally private counting queries," in Proceedings of the 13th Workshop on Programming Languages and Analysis for Security, 2018, pp. 13–16.
[7] B. Balle and Y.-X. Wang, "Improving the Gaussian mechanism for differential privacy: Analytical calibration and optimal denoising," in International Conference on Machine Learning. PMLR, 2018, pp. 394–403.
[8] C. L. Canonne, G. Kamath, and T. Steinke, "The discrete Gaussian for differential privacy," Advances in Neural Information Processing Systems, vol. 33, pp. 15676–15688, 2020.
[9] S. L. Warner, "Randomized response: A survey technique for eliminating evasive answer bias," Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965.
[10] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, "Local privacy and minimax bounds: Sharp rates for probability estimation," in Advances in Neural Information Processing Systems, 2014.
[11] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, "Minimax optimal procedures for locally private estimation," Journal of the American Statistical Association, vol. 113, no. 521, pp. 182–201, 2018.
[12] F. McSherry and K. Talwar, "Mechanism design via differential privacy," in 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07). IEEE, 2007, pp. 94–103.
[13] Q. Geng and P. Viswanath, "The optimal mechanism in differential privacy," in 2014 IEEE International Symposium on Information Theory. IEEE, 2014, pp. 2371–2375.
[14] P. Kairouz, S. Oh, and P. Viswanath, "Extremal mechanisms for local differential privacy," Advances in Neural Information Processing Systems, vol. 27, 2014.
[15] P. Kairouz, K. Bonawitz, and D. Ramage, "Discrete distribution estimation under local privacy," in International Conference on Machine Learning. PMLR, 2016, pp. 2436–2444.
[16] N. Agarwal, P. Kairouz, and Z. Liu, "The Skellam mechanism for differentially private federated learning," Advances in Neural Information Processing Systems, vol. 34, pp. 5052–5064, 2021.
[17] W.-N. Chen, A. Özgür, and P. Kairouz, "The Poisson binomial mechanism for secure and private federated learning," arXiv preprint arXiv:2207.09916, 2022.
[18] T. Murakami, H. Hino, J. Sakuma, Y. Kawamoto, and C. Troncoso, "Utility-optimized local differential privacy mechanisms for distribution estimation," in USENIX Security Symposium, 2019.
[19] P. Sadeghi and C.-H. Chien, "On the connection between the ABS perturbation methodology and differential privacy," Journal of Privacy and Confidentiality, vol. 14, no. 2, Jul. 2024. [Online]. Available: https://journalprivacyconfidentiality.org/index.php/jpc/article/view/859