Balanced Iteration Subsampling achieves stronger privacy amplification than Poisson subsampling in DP-SGD by eliminating participation variance while keeping uniform marginal participation.
arXiv preprint arXiv:2204.13650
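The participation-variance contrast in the summary above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's actual sampler: under Poisson subsampling each example joins each iteration independently, so its total participation count is random, while a balanced scheme fixes each example's count exactly while keeping the same marginal rate. All names and parameters (`n`, `T`, `q`) are illustrative.

```python
import random
from statistics import mean, pvariance

random.seed(0)
n, T = 1000, 100   # dataset size, number of iterations
q = 0.1            # sampling rate; expected participations per example = q * T

# Poisson subsampling: each example joins each iteration independently with prob q,
# so its participation count is Binomial(T, q) with variance q * (1 - q) * T.
poisson_counts = [sum(random.random() < q for _ in range(T)) for _ in range(n)]

# Balanced subsampling (sketch): each example is assigned to exactly k = q * T
# distinct iterations chosen uniformly at random, so its count is deterministic.
k = int(q * T)
balanced_counts = [len(random.sample(range(T), k)) for _ in range(n)]

print("Poisson: ", mean(poisson_counts), pvariance(poisson_counts))    # variance ≈ q*(1-q)*T ≈ 9
print("Balanced:", mean(balanced_counts), pvariance(balanced_counts))  # variance exactly 0
```

Both schemes give every example the same marginal participation rate q, but only the balanced one drives the participation variance to zero, which is the property the summary attributes to Balanced Iteration Subsampling.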
4 Pith papers cite this work. Polarity classification is still indexing.
fields: cs.LG
years: 2026
verdicts: UNVERDICTED

4 representative citing papers
citing papers explorer
-
Less Random, More Private: What is the Optimal Subsampling Scheme for DP-SGD?
Balanced Iteration Subsampling achieves stronger privacy amplification than Poisson subsampling in DP-SGD by eliminating participation variance while keeping uniform marginal participation.
-
PACZero: PAC-Private Fine-Tuning of Language Models via Sign Quantization
PACZero achieves zero mutual information privacy for LLM fine-tuning via sign-quantized zeroth-order gradients, delivering near-non-private accuracy on SST-2 and SQuAD at I=0.
-
FIBER: A Differentially Private Optimizer with Filter-Aware Innovation Bias Correction
FiBeR adds a closed-form filter-aware correction A(ω)σ_w² to the second-moment term for temporally filtered DP gradients, improving adaptive optimization performance.
-
DPrivBench: Benchmarking LLMs' Reasoning for Differential Privacy
DPrivBench shows that top LLMs handle basic differential privacy mechanisms but fail on advanced algorithms, exposing gaps in automated DP reasoning.
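The sign-quantized zeroth-order gradient idea from the PACZero entry above can be sketched with a two-point SPSA-style estimator whose output is quantized to one bit per coordinate. This is an illustrative toy (the function name, step sizes, and the toy quadratic loss are assumptions, not PACZero's actual algorithm):

```python
import random

random.seed(0)

def sign_zo_grad(loss, theta, eps=1e-3, rng=random):
    """One sign-quantized zeroth-order gradient estimate (SPSA-style sketch).

    Perturb all coordinates along a random +/-1 direction, take a two-point
    finite difference of the loss, and keep only the sign per coordinate.
    """
    z = [rng.choice((-1.0, 1.0)) for _ in theta]
    lp = loss([t + eps * zi for t, zi in zip(theta, z)])
    lm = loss([t - eps * zi for t, zi in zip(theta, z)])
    scale = (lp - lm) / (2 * eps)
    # Sign quantization: each coordinate of the estimate carries exactly one bit.
    return [1.0 if scale * zi > 0 else -1.0 for zi in z]

# Toy quadratic loss, minimized with signSGD-style updates on the estimate.
loss = lambda th: sum(t * t for t in th)
theta = [3.0, -2.0]
lr = 0.05
for _ in range(200):
    g = sign_zo_grad(loss, theta)
    theta = [t - lr * gi for t, gi in zip(theta, g)]
print(loss(theta))  # settles close to 0, oscillating at roughly the step size
```

Because only loss evaluations and one bit per coordinate leave the model, no real-valued gradient is ever released; how that translates into the paper's I=0 mutual-information guarantee is beyond this sketch.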