Machine Unlearning: A Comprehensive Survey
As the right to be forgotten has been legislated worldwide, many studies attempt to design unlearning mechanisms to protect users' privacy when they want to leave machine learning service platforms. Specifically, machine unlearning aims to make a trained model remove the contribution of an erased subset of its training dataset. This survey aims to systematically classify a wide range of machine unlearning methods and discuss their differences, connections, and open problems. We categorize current unlearning methods into four scenarios: centralized unlearning, distributed and irregular data unlearning, unlearning verification, and privacy and security issues in unlearning. Since centralized unlearning is the primary domain, we introduce it in two parts: first, we classify centralized unlearning into exact unlearning and approximate unlearning; second, we offer a detailed introduction to the techniques these methods use. Beyond centralized unlearning, we survey studies on distributed and irregular data unlearning, introducing federated unlearning and graph unlearning as two representative directions. After introducing unlearning methods, we review studies on unlearning verification. Moreover, we consider the privacy and security issues essential to machine unlearning and organize the latest related literature. Finally, we discuss the challenges of various unlearning scenarios and outline potential research directions.
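The distinction the abstract draws between exact and approximate unlearning can be made concrete with a toy model. The sketch below is not from the survey; it uses a simple sample-mean "model" (a hypothetical choice for illustration) to show the defining property of exact unlearning: the unlearned model must coincide with the gold standard of retraining from scratch on the retained data, even when the update is computed without revisiting the full dataset.

```python
# Minimal sketch (illustrative, not from the survey): exact unlearning
# for a sample-mean "model". For this model an erased point can be
# removed with a cheap decremental update, and the result provably
# equals full retraining on the retained data.

def train_mean(data):
    """'Train' the model: here, just the sample mean of the data."""
    return sum(data) / len(data)

def unlearn_mean(mean, n, x):
    """Remove point x from a mean over n points without revisiting data."""
    return (n * mean - x) / (n - 1)

data = [2.0, 4.0, 6.0, 8.0]
model = train_mean(data)                          # mean of all four points

# Erase the point 8.0 via the decremental update...
unlearned = unlearn_mean(model, len(data), 8.0)

# ...and verify it matches the gold standard: retraining on retained data.
retrained = train_mean([2.0, 4.0, 6.0])
assert abs(unlearned - retrained) < 1e-12
```

Approximate unlearning relaxes exactly this equality: it only requires the unlearned model to be close (in parameters or in distribution) to the retrained one, trading the guarantee for efficiency on models where no closed-form deletion update exists.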
Forward citations
Cited by 5 Pith papers
- Knowledge Beyond Language: Bridging the Gap in Multilingual Machine Unlearning Evaluation
  New metrics KSS and KPS are introduced to evaluate multilingual machine unlearning quality and cross-language consistency in LLMs, addressing limitations of single-language evaluation protocols.
- Class Unlearning via Depth-Aware Removal of Forget-Specific Directions
  DAMP performs one-shot class unlearning by extracting and projecting out forget-specific residual directions at each network depth using class prototypes and a separability-derived scaling rule.
- WIN-U: Woodbury-Informed Newton-Unlearning as a retain-free Machine Unlearning Framework
  WIN-U delivers a retain-free unlearning update that approximates the gold-standard retrained model via a Woodbury-informed Newton step using only forget-set curvature information.
- Not Every Subject Should Stay: Machine Unlearning for Noisy Engagement Recognition
  Approximate subject-level unlearning recovers 89.3% and 92.5% of oracle performance gains on EngageNet and DAiSEE at roughly one-quarter the retraining cost in K=3 forget-set regimes.
- What Security and Privacy Transparency Users Need from Consumer-Facing Generative AI
  A qualitative study of 21 GenAI users finds that current S&P transparency is often seen as incomplete or untrustworthy, leading to proxy-based adoption and constrained use, with calls for independent evaluations and o...