Title resolution pending
2 Pith papers cite this work. Polarity classification is still indexing.

Fields: cs.LG (2) · Years: 2026 (2) · Verdicts: UNVERDICTED (2)

Representative citing papers:
- SafeAdapt: Provably Safe Policy Updates in Deep Reinforcement Learning
  SafeAdapt certifies a Rashomon set of safe policies from demonstration data and projects updates from arbitrary RL algorithms onto it to guarantee preservation of safety on source tasks.
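The projection idea in the summary above can be illustrated with a toy stand-in: pulling an arbitrary RL update back onto a region around certified-safe parameters. The L2 ball used here is purely an assumption for illustration; SafeAdapt's Rashomon set is certified from demonstration data and is not a simple ball, and `project_onto_safe_ball` is a hypothetical helper, not the paper's API.

```python
import numpy as np

def project_onto_safe_ball(theta_new, theta_safe, radius):
    """Project updated policy parameters onto an L2 ball around
    certified-safe parameters (illustrative stand-in for projecting
    onto a certified safe set)."""
    delta = theta_new - theta_safe
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return theta_new  # update already lies inside the safe region
    # Scale the update direction back to the boundary of the safe region.
    return theta_safe + (radius / norm) * delta

# Hypothetical usage: an unconstrained update leaves the safe region.
theta_safe = np.zeros(3)
theta_new = np.array([3.0, 4.0, 0.0])  # L2 distance 5 from theta_safe
projected = project_onto_safe_ball(theta_new, theta_safe, radius=1.0)
# projected lies on the boundary: [0.6, 0.8, 0.0]
```

Any update rule can feed this step, which matches the summary's claim that updates from arbitrary RL algorithms are projected onto the certified set.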
- Fine-Tuning Regimes Define Distinct Continual Learning Problems
  The relative rankings of continual learning methods are not preserved across different fine-tuning regimes defined by trainable parameter depth.