6 papers cite this work; polarity classification is still indexing. Citation-role summary: method (1). Citation-polarity summary: use method (1). Years: 2026 (6).
Differentially Private Language Generation and Identification in the Limit
Differential privacy permits generation in the limit for any countable collection of languages but prohibits identification for collections containing two languages with infinite intersection and finite difference; in stochastic settings, private identification is possible exactly when adversarial non-private identification is possible.

Citing papers explorer
-
Unlearning with Asymmetric Sources: Improved Unlearning-Utility Trade-off with Public Data
Asymmetric Langevin Unlearning uses public data to suppress unlearning noise costs by O(1/n_pub²), enabling practical mass unlearning with preserved utility under distribution mismatch.
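The summary gives only the high-level shape of Asymmetric Langevin Unlearning. A toy scalar sketch of a Langevin-style unlearning step is below; the role of the public-data gradient and the temperature schedule modeling the O(1/n_pub²) noise suppression are purely illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def langevin_unlearn_step(theta, grad_retain, grad_public, n_pub,
                          lr=0.05, base_temp=1.0, seed=0):
    """One Langevin-style unlearning step on a scalar parameter (toy).

    grad_retain : gradient on the retained private data
    grad_public : gradient on auxiliary public data (hypothetical role:
                  it anchors the update so less injected noise is needed)
    The O(1/n_pub^2) noise reduction from the summary is modeled here as
    an illustrative temperature scaling -- not the paper's schedule.
    """
    rng = random.Random(seed)
    temp = base_temp / (1.0 + n_pub ** 2)          # assumed noise suppression
    drift = grad_retain + grad_public              # combined descent direction
    noise = rng.gauss(0.0, math.sqrt(2.0 * lr * temp))  # Langevin noise
    return theta - lr * drift + noise
```

With many public samples the injected noise vanishes and the step approaches plain gradient descent, which is the utility-preservation intuition the summary points at.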
-
PACZero: PAC-Private Fine-Tuning of Language Models via Sign Quantization
PACZero achieves zero mutual information privacy for LLM fine-tuning via sign-quantized zeroth-order gradients, delivering near-non-private accuracy on SST-2 and SQuAD at I=0.
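The mechanism is only named at a high level here; a minimal sketch of what a sign-quantized two-point zeroth-order estimate could look like follows. The function names, the Rademacher perturbation direction, and the one-sign-bit-per-coordinate release are illustrative assumptions, not PACZero's actual implementation.

```python
import random

def zo_sign_grad(loss, theta, eps=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate, released only as signs.

    loss  : callable mapping a parameter list to a scalar
    theta : list of floats (current parameters)
    Returns a list of +1.0/-1.0 values -- the sign-quantized estimate.
    """
    rng = random.Random(seed)
    # Rademacher perturbation direction
    u = [rng.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + eps * d for t, d in zip(theta, u)]
    minus = [t - eps * d for t, d in zip(theta, u)]
    # Finite-difference directional derivative along u
    scale = (loss(plus) - loss(minus)) / (2 * eps)
    # Only signs leave the trusted boundary: one bit per coordinate
    return [1.0 if scale * d >= 0 else -1.0 for d in u]

def sgd_sign_step(theta, signs, lr=0.1):
    # Descend against the sign-quantized estimate
    return [t - lr * s for t, s in zip(theta, signs)]
```

Because each released coordinate is a single deterministic function of a bounded quantity's sign, the released update carries no more than one bit per coordinate, which is the intuition behind the zero-mutual-information claim.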
-
Trade-off Functions for DP-SGD with Subsampling based on Random Shuffling: Tight Upper and Lower Bounds
Tight closed-form bounds via Berry–Esseen show that DP-SGD with random-shuffling subsampling achieves near-ideal privacy (trade-off function close to 1 − α) for σ ≥ √(3/ln M) and large M; δ grows linearly in the number of epochs E, restricting E to O(√M), with asymptotic O(√E) growth of δ under E = c_M²M.
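As background for the trade-off-function claim, the standard Gaussian trade-off is f_μ(α) = Φ(Φ⁻¹(1 − α) − μ), the minimum type-II error of a test between N(0,1) and N(μ,1) at type-I level α; "near-ideal privacy" means f(α) ≈ 1 − α. A minimal stdlib sketch (the bisection inverse CDF is an implementation convenience, not part of the paper's bounds):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection on [-10, 10]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def gaussian_tradeoff(alpha, mu):
    """f_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu): best achievable
    type-II error at type-I level alpha for N(0,1) vs N(mu,1)."""
    return phi(phi_inv(1.0 - alpha) - mu)
```

At μ = 0 the two hypotheses are indistinguishable and f(α) = 1 − α exactly; larger μ pulls the trade-off curve down, i.e. weaker privacy.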
-
Deep Learning under Fractional-Order Differential Privacy
FO-DP-SGD adds fractional-order memory to the private gradient release in DP-SGD, achieving better test accuracy on SVHN, CIFAR-10, and CIFAR-100 while using standard Rényi DP accounting with adjusted sensitivity βC.
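"Fractional-order memory" can be read as a Grünwald–Letnikov-style weighted sum over past gradients. The toy scalar sketch below rests on that assumption: the GL weight recurrence is standard, but the exact update rule and the use of the βC sensitivity from the summary in the noise scale are guesses, not the paper's construction.

```python
import random

def gl_weights(beta, K):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(beta, k), k = 0..K,
    via the recurrence w_k = w_{k-1} * (k - 1 - beta) / k."""
    w = [1.0]
    for k in range(1, K + 1):
        w.append(w[-1] * (k - 1 - beta) / k)
    return w

def fo_dp_step(theta, grads, history, beta=0.7, C=1.0,
               sigma=1.0, lr=0.1, seed=0):
    """One FO-DP-SGD-like step on a scalar parameter (toy).

    grads   : per-example gradients for the current batch
    history : previously released batch gradients, newest first
    """
    rng = random.Random(seed)
    # Clip each per-example gradient to magnitude C, then average
    clipped = [max(-C, min(C, g)) for g in grads]
    g_now = sum(clipped) / len(clipped)
    history = [g_now] + history
    # Fractional-order memory: GL-weighted sum over the gradient history
    w = gl_weights(beta, len(history) - 1)
    g_frac = sum(wk * gk for wk, gk in zip(w, history))
    # Gaussian noise scaled by the adjusted sensitivity beta*C (per the
    # summary; treated here as an assumption)
    noisy = g_frac + rng.gauss(0.0, sigma * beta * C / len(grads))
    return theta - lr * noisy, history
```

At β = 1 the GL weights reduce to a first difference (1, −1, 0, …), so the memory term degenerates and only recent gradients matter; fractional β spreads weight over the whole history with decaying magnitude.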
-
Class-Aware Adaptive Differential Privacy in Deep Learning for Sensor-Based Fall Detection
CA-ADP adjusts differential privacy noise per mini-batch class composition to improve F-scores by 3.3-8.5% over standard DP on three fall-detection datasets while claiming formal (ε,δ) guarantees.
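The summary does not say how CA-ADP maps class composition to noise. As one illustrative shape, assume the per-batch noise multiplier shrinks when a minority class occupies a small fraction of the batch, so rare-class gradients are less distorted; `class_aware_sigma`, the `floor` parameter, and the linear interpolation rule are all hypothetical.

```python
import random
from collections import Counter

def class_aware_sigma(labels, base_sigma=1.0, floor=0.5):
    """Hypothetical per-batch noise adjustment from class composition.

    A perfectly balanced batch keeps base_sigma; the more skewed the
    batch (rarest class underrepresented), the closer sigma moves to
    floor * base_sigma.
    """
    counts = Counter(labels)
    min_frac = min(counts.values()) / len(labels)   # rarest class share
    balanced = 1.0 / len(counts)                    # share under balance
    scale = floor + (1.0 - floor) * (min_frac / balanced)
    return base_sigma * scale

def noisy_batch_grad(grads, labels, C=1.0, base_sigma=1.0, seed=0):
    """Clipped mean gradient with class-aware Gaussian noise (toy, scalar)."""
    rng = random.Random(seed)
    sigma = class_aware_sigma(labels, base_sigma)
    clipped = [max(-C, min(C, g)) for g in grads]
    mean = sum(clipped) / len(clipped)
    return mean + rng.gauss(0.0, sigma * C / len(grads))
```

Note that making σ depend on the labels complicates the (ε, δ) accounting, which is presumably why the summary flags the formal guarantee as a claim.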