pith. machine review for the scientific record.

arxiv: 0911.5708 · v1 · submitted 2009-11-30 · 💻 cs.LG · cs.CR · cs.DB

Recognition: unknown

Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning

Authors on Pith: no claims yet
classification 💻 cs.LG cs.CR cs.DB
keywords: learning, mechanism, differential, feature, function, kernel, mechanisms, privacy
0 comments
read the original abstract

Several recent studies in privacy-preserving learning have considered the trade-off between utility or risk and the level of differential privacy guaranteed by mechanisms for statistical query processing. In this paper we study this trade-off in private Support Vector Machine (SVM) learning. We present two efficient mechanisms, one for the case of finite-dimensional feature mappings and one for potentially infinite-dimensional feature mappings with translation-invariant kernels. For the case of translation-invariant kernels, the proposed mechanism minimizes regularized empirical risk in a random Reproducing Kernel Hilbert Space whose kernel uniformly approximates the desired kernel with high probability. This technique, borrowed from large-scale learning, allows the mechanism to respond with a finite encoding of the classifier, even when the function class is of infinite VC dimension. Differential privacy is established using a proof technique from algorithmic stability. Utility--the mechanism's response function is pointwise epsilon-close to non-private SVM with probability 1-delta--is proven by appealing to the smoothness of regularized empirical risk minimization with respect to small perturbations to the feature mapping. We conclude with a lower bound on the optimal differential privacy of the SVM. This negative result states that for any delta, no mechanism can be simultaneously (epsilon,delta)-useful and beta-differentially private for small epsilon and small beta.
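The abstract's mechanism for translation-invariant kernels can be illustrated concretely. The sketch below assumes an RBF kernel, approximates it with random Fourier features (the large-scale-learning technique the abstract borrows, so the released classifier has a finite encoding), trains a linear SVM in that random feature space, and adds Laplace noise to the weight vector for differential privacy. The subgradient solver, parameter names, and the simple sensitivity bound are illustrative assumptions, not the paper's exact algorithm or noise calibration.

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, rng):
    """Map X into a random feature space whose inner product approximates
    the RBF kernel exp(-gamma * ||x - y||^2) with high probability."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def train_linear_svm(Z, y, lam=0.1, epochs=200, lr=0.01):
    """Minimize regularized empirical hinge loss by plain subgradient descent
    (a stand-in for whatever regularized ERM solver the mechanism uses)."""
    n, D = Z.shape
    w = np.zeros(D)
    for _ in range(epochs):
        margins = y * (Z @ w)
        mask = margins < 1
        grad = lam * w - (y[mask] @ Z[mask]) / n
        w -= lr * grad
    return w

def private_svm_release(X, y, eps=1.0, n_features=100, gamma=0.5, lam=0.1, seed=0):
    """Release a noisy weight vector over the random feature space."""
    rng = np.random.default_rng(seed)
    Z = random_fourier_features(X, n_features, gamma, rng)
    w = train_linear_svm(Z, y, lam=lam)
    # Laplace noise scaled by an assumed L1-sensitivity bound on the learned
    # weights; the paper derives the actual scale via algorithmic stability.
    sensitivity = 2.0 / (lam * len(y))
    return w + rng.laplace(scale=sensitivity / eps, size=w.shape)
```

A caller scores a new point x by computing its random features with the same (released) W and b and taking the sign of the inner product with the noisy weights, so only a finite-dimensional object ever leaves the mechanism even though the RBF kernel's function class has infinite VC dimension.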

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Distributed Deep Variational Approach for Privacy-preserving Data Release

    cs.CR 2026-05 unverdicted novelty 5.0

    GPP trains local variational encoders in federated settings to release representations that keep utility within 1% of an autoencoder baseline while driving adversary AUC on sensitive attributes to near-random levels o...