pith. machine review for the scientific record.

arxiv: 1812.00984 · v2 · submitted 2018-12-03 · 📊 stat.ML · cs.LG

Recognition: unknown

Protection Against Reconstruction and Its Applications in Private Federated Learning

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: data · privacy · learning · local · protections · statistical · large-scale · private
Original abstract

In large-scale statistical learning, data collection and model fitting are moving increasingly toward peripheral devices---phones, watches, fitness trackers---away from centralized data collection. Concomitant with this rise in decentralized data are increasing challenges of maintaining privacy while allowing enough information to fit accurate, useful statistical models. This motivates local notions of privacy---most significantly, local differential privacy---where data is obfuscated before a statistician or learner can even observe it, providing strong protections against disclosure of individuals' data. Yet local privacy as traditionally employed may prove too stringent for practical use, especially in modern high-dimensional statistical and machine learning problems. Consequently, we revisit the types of disclosures and adversaries against which we provide protections, considering adversaries with limited prior information and ensuring that, with high probability, they cannot reconstruct an individual's data within useful tolerances. By reconceptualizing these protections, we allow more useful data release---large privacy parameters in local differential privacy---and we design new (minimax) optimal locally differentially private mechanisms for statistical learning problems for all privacy levels. We thus present practicable approaches to large-scale locally private model training that were previously impossible, showing theoretically and empirically that we can fit large-scale image classification and language models with little degradation in utility.
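To make the local-privacy setting concrete: under local differential privacy each device perturbs its own data before anything leaves the device. The sketch below is a minimal illustration using the standard Laplace mechanism on bounded scalars, not the paper's (minimax-optimal) mechanisms; the bound, privacy parameter, and sample size are assumptions chosen for the example. Note the "large" epsilon, in the spirit of the abstract's argument that looser local privacy parameters can still guard against reconstruction while preserving utility.

```python
import numpy as np

def local_laplace_report(x, epsilon, bound=1.0):
    """Privatize one scalar in [-bound, bound] under epsilon-local DP.

    The identity query on [-bound, bound] has sensitivity 2*bound,
    so Laplace noise with scale 2*bound/epsilon suffices.
    """
    x = float(np.clip(x, -bound, bound))
    return x + np.random.laplace(loc=0.0, scale=2.0 * bound / epsilon)

# Each "device" sends only its noisy report; the aggregator never
# sees raw data, yet the mean of the reports is an unbiased estimate
# of the true mean because the noise has zero mean.
np.random.seed(0)
epsilon = 8.0                                  # a large privacy parameter
data = np.random.uniform(-1.0, 1.0, size=10_000)
reports = np.array([local_laplace_report(x, epsilon) for x in data])
estimate = reports.mean()
```

With 10,000 reports and this noise scale, the privatized mean typically lands within a few hundredths of the true mean; the added estimation error shrinks as 1/sqrt(n), which is why large-scale federated settings can absorb per-report noise.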

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Shuffling-Aware Optimization for Private Vector Mean Estimation

    cs.LG 2026-04 unverdicted novelty 7.0

    Using the shuffle index, the authors formulate and solve an optimization problem for post-shuffle minimax-optimal unbiased mean estimation, yielding an asymptotically optimal mechanism whose privacy-utility tradeoff a...

  2. Enhanced Privacy and Communication Efficiency in Non-IID Federated Learning with Adaptive Quantization and Differential Privacy

    cs.CV 2026-04 unverdicted novelty 5.0

    Adaptive bit-length schedulers plus Laplacian DP in non-IID FL reduce communicated data by up to 52.64% on MNIST and 45% on CIFAR-10 while keeping competitive accuracy and privacy.