Making AI Forget You: Data Deletion in Machine Learning
4 Pith papers cite this work.
Citing papers
- Shape of Memory: A Geometric Analysis of Machine Unlearning in Second-Order Optimizers
  Second-order optimizers retain residual geometric memory in their state after unlearning that first-order metrics miss, and only controlled eigendecay perturbations fully erase it.
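The claim can be illustrated with a toy curvature accumulator. This is a sketch only: `curvature_state`, the retain/forget split, and the reading of "eigendecay perturbation" as decaying the eigenvalues of the residual state component are all illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy second-order state: an averaged gradient outer-product accumulator,
# standing in for the curvature state a full-matrix preconditioner keeps.
def curvature_state(grads):
    return sum(np.outer(g, g) for g in grads) / len(grads)

retain = [rng.normal(size=4) for _ in range(50)]
forget = [rng.normal(size=4) + 3.0 for _ in range(5)]  # distinct direction

H_full = curvature_state(retain + forget)   # state after training on all data
H_oracle = curvature_state(retain)          # state a retrain-from-scratch would have

# Residual geometric memory: the state's geometry still differs from the
# oracle state along the forgotten direction, even if weights are scrubbed.
residual = H_full - H_oracle
gap_before = np.linalg.norm(residual)

# Eigendecay perturbation (illustrative reading): decay the eigenvalues of
# the residual component so the state geometry matches the oracle state.
w, V = np.linalg.eigh(residual)
decay = 0.0                                 # 0.0 = fully erase the residual
H_scrubbed = H_oracle + (V * (decay * w)) @ V.T

gap_after = np.linalg.norm(H_scrubbed - H_oracle)
```

With `decay = 0.0` the residual spectrum is erased entirely and the state coincides with the oracle state; intermediate decay factors would leave a measurable gap that, per the summary, first-order metrics on the weights alone would miss.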
- Separable Expert Architecture: Toward Privacy-Preserving LLM Personalization via Composable Adapters and Deletable User Proxies
  A separable expert architecture uses base models, LoRA adapters, and deletable per-user proxies to enable privacy-preserving personalization and deterministic unlearning in LLMs.
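A minimal sketch of the separability idea, assuming a LoRA-style low-rank delta per user; `AdapterStore` and its methods are hypothetical names for illustration, not the paper's API.

```python
import numpy as np

class AdapterStore:
    """Frozen shared base weights plus per-user low-rank deltas kept separately."""

    def __init__(self, base):
        self.base = base      # shared, immutable weights
        self.deltas = {}      # user_id -> (A, B) low-rank proxy

    def personalize(self, user_id, rank=2):
        dim = self.base.shape[0]
        # LoRA-style delta: the user's effective weights are base + A @ B.
        self.deltas[user_id] = (0.01 * np.random.randn(dim, rank),
                                0.01 * np.random.randn(rank, dim))

    def weights_for(self, user_id):
        if user_id not in self.deltas:
            return self.base  # unknown users get the shared model
        A, B = self.deltas[user_id]
        return self.base + A @ B

    def forget(self, user_id):
        # Deterministic unlearning: the user's only trace is their delta,
        # so deletion is a dictionary removal, not an optimization problem.
        self.deltas.pop(user_id, None)

store = AdapterStore(base=np.eye(4))
store.personalize("alice")
personalized = store.weights_for("alice")
store.forget("alice")
```

The design choice this illustrates: because user-specific information never flows into the shared base, deletion is exact by construction rather than approximated by further training.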
- Machine Unlearning for Class Removal through SISA-based Deep Neural Network Architectures
  A modified SISA architecture with replay and gating achieves effective class removal from trained CNNs on image datasets while preserving accuracy and cutting retraining costs.
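A minimal sketch of the underlying SISA pattern (shard the data, train one model per shard, aggregate predictions, retrain only affected shards on deletion); per-shard class histograms stand in for the CNNs, and the paper's replay and gating components are omitted.

```python
from collections import Counter

def train_shard(shard):
    # Stand-in "model": a class histogram over the shard's (x, label) pairs.
    return Counter(label for _, label in shard)

def predict(models):
    # Aggregate the constituent models by majority vote.
    votes = sum(models, Counter())
    return votes.most_common(1)[0][0]

def unlearn_class(shards, models, cls):
    for i, shard in enumerate(shards):
        if any(label == cls for _, label in shard):
            shards[i] = [ex for ex in shard if ex[1] != cls]
            models[i] = train_shard(shards[i])  # retrain only this shard
    return shards, models

shards = [[(0, "cat"), (1, "dog")], [(2, "cat")], [(3, "dog"), (4, "dog")]]
models = [train_shard(s) for s in shards]
shards, models = unlearn_class(shards, models, "cat")
print(predict(models))  # -> dog: no shard model has seen "cat" anymore
```

The cost saving comes from isolation: only the two shards containing "cat" examples are retrained, while the third model is reused untouched.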
- Operationalising the Right to be Forgotten in LLMs: A Lightweight Sequential Unlearning Framework for Privacy-Aligned Deployment in Politically Sensitive Environments
  A lightweight sequential unlearning framework for LLMs achieves effective suppression of sensitive behaviors on a benchmark with minimal loss in accuracy and fluency.
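One common recipe for this kind of sequential suppression is interleaved gradient ascent on a forget set and gradient descent on a retain set; the toy below uses a scalar least-squares model in place of an LLM, and both the schedule and all names are illustrative assumptions rather than the paper's actual method.

```python
def grad(w, x, y):
    return 2.0 * (w * x - y) * x          # d/dw of (w*x - y)**2

def sequential_unlearn(w, forget, retain, lr=0.01, steps=200):
    for _ in range(steps):
        for x, y in forget:
            w += lr * grad(w, x, y)       # ascend: push forget-set loss up
        for x, y in retain:
            w -= lr * grad(w, x, y)       # descend: keep retain-set loss low
    return w

retain = [(1.0, 2.0), (2.0, 4.0)]         # consistent with w = 2
forget = [(1.0, 5.0)]                     # conflicting pair to suppress
w = sequential_unlearn(2.0, forget, retain)
```

The retain-set descent step is what keeps the trade-off the summary describes: the forgotten association is degraded while the model is continually pulled back toward its behavior on retained data.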