pith. machine review for the scientific record.

arxiv: 2408.00920 · v4 · submitted 2024-08-01 · 💻 cs.LG · stat.ML

Recognition: unknown

Towards Certified Unlearning for Deep Neural Networks

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: unlearning, certified, dnns, certification, deep, extend, guarantees, machine
original abstract

In the field of machine unlearning, certified unlearning has been extensively studied in convex machine learning models due to its high efficiency and strong theoretical guarantees. However, its application to deep neural networks (DNNs), known for their highly nonconvex nature, still poses challenges. To bridge the gap between certified unlearning and DNNs, we propose several simple techniques to extend certified unlearning methods to nonconvex objectives. To reduce the time complexity, we develop an efficient computation method by inverse Hessian approximation without compromising certification guarantees. In addition, we extend our discussion of certification to nonconvergence training and sequential unlearning, considering that real-world users can send unlearning requests at different time points. Extensive experiments on three real-world datasets demonstrate the efficacy of our method and the advantages of certified unlearning in DNNs.
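The abstract's core mechanic, a Newton-style update that removes a forget set's gradient contribution through a (damped) inverse Hessian and then injects calibrated noise, can be sketched generically as below. This is an illustration of the certified-unlearning update family, not the paper's implementation; the damping `lam` and noise scale `sigma` are assumed placeholder parameters.

```python
import numpy as np

def newton_unlearning_step(theta, grad_forget, hessian, lam=1e-2, sigma=0.0, seed=0):
    """Generic one-shot unlearning update (sketch, not the paper's code).

    theta        : current model parameters, shape (d,)
    grad_forget  : gradient of the loss on the points to forget, shape (d,)
    hessian      : loss Hessian at theta, e.g. over the retained data, shape (d, d)
    lam          : damping that keeps the solve well-posed for nonconvex losses
    sigma        : Gaussian noise scale; in certified unlearning it would be
                   calibrated so the result is indistinguishable from retraining
    """
    d = theta.shape[0]
    # Damped Newton step that approximately cancels the forget set's influence.
    H = hessian + lam * np.eye(d)
    theta_new = theta + np.linalg.solve(H, grad_forget)
    # Noise injection is what turns the approximation into a certified guarantee.
    if sigma > 0:
        theta_new = theta_new + np.random.default_rng(seed).normal(0.0, sigma, size=d)
    return theta_new
```

The expensive piece is the linear solve against the Hessian, which is why the paper's inverse-Hessian approximation matters for DNN-scale models.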

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. WIN-U: Woodbury-Informed Newton-Unlearning as a retain-free Machine Unlearning Framework

    cs.LG · 2026-04 · unverdicted · novelty 6.0

    WIN-U delivers a retain-free unlearning update that approximates the gold-standard retrained model via a Woodbury-informed Newton step using only forget-set curvature information.
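A "Woodbury-informed Newton step" presumably exploits the matrix-inversion lemma: if the forget set's curvature enters as a low-rank term U Uᵀ, the inverse of the downdated Hessian H − U Uᵀ can be recovered from a cached H⁻¹ with only a small k×k solve. A minimal numerical sketch of that identity (names are illustrative, not WIN-U's API):

```python
import numpy as np

def woodbury_downdate_inverse(H_inv, U):
    """(H - U @ U.T)^{-1} computed from H^{-1} via the Woodbury identity.

    H_inv : precomputed inverse of the full Hessian, shape (d, d)
    U     : low-rank factor of the forget-set curvature, shape (d, k), k << d
    Cost is O(d^2 k + k^3) instead of a fresh O(d^3) inversion.
    """
    k = U.shape[1]
    HiU = H_inv @ U                      # (d, k)
    cap = np.eye(k) - U.T @ HiU          # capacitance matrix, (k, k)
    return H_inv + HiU @ np.linalg.solve(cap, HiU.T)
```

Because only U (forget-set information) and the cached H⁻¹ are needed, such an update can be formed without revisiting the retained data, which is consistent with the "retain-free" framing in the summary above.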