pith. machine review for the scientific record.

arxiv: 1802.06739 · v1 · submitted 2018-02-19 · 💻 cs.LG · cs.CR · stat.ML

Recognition: unknown

Differentially Private Generative Adversarial Network

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.CR · stat.ML
keywords: data, generative, gans, privacy, private, adversarial, differentially, distribution
original abstract

Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.
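The mechanism the abstract describes — adding carefully designed noise to gradients during training — can be illustrated with a minimal sketch of one differentially private gradient step: clip each example's gradient to a fixed norm, average, and add Gaussian noise scaled to that norm. This is an illustrative reconstruction, not the authors' implementation; the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions chosen for clarity.

```python
import numpy as np

def dp_perturb_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sketch of a DP gradient step: clip per-example gradients to `clip_norm`,
    average them, and add Gaussian noise with std proportional to the clip norm.
    The clipping bounds each example's influence (sensitivity), which is what
    makes the added Gaussian noise yield a differential privacy guarantee."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm; leave others unchanged.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std follows the usual sigma * C / batch_size shape from DP-SGD-style analyses.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

In a DPGAN-style setup this perturbation would be applied to the discriminator's updates, since the discriminator is the component that directly touches the sensitive training data.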

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Class-Aware Adaptive Differential Privacy in Deep Learning for Sensor-Based Fall Detection

    cs.CR 2026-05 unverdicted novelty 5.0

    CA-ADP adjusts differential privacy noise per mini-batch class composition to improve F-scores by 3.3-8.5% over standard DP on three fall-detection datasets while claiming formal (ε,δ) guarantees.

  2. Protecting and Preserving Protest Dynamics for Responsible Analysis

    cs.CV 2026-04 unverdicted novelty 5.0

    A responsible computing framework substitutes real protest imagery with labeled synthetic reproductions from conditional image synthesis to enable privacy-aware analysis of collective action patterns.