pith. machine review for the scientific record.

Shadow alignment: The ease of subverting safely-aligned language models

10 Pith papers cite this work. Polarity classification is still indexing.

citation summary

roles: background 1
polarities: support 1
years: 2026 (8), 2024 (2)

representative citing papers

Few-Shot Truly Benign DPO Attack for Jailbreaking LLMs

cs.CR · 2026-05-09 · unverdicted · novelty 6.0

A truly benign DPO attack using 10 harmless preference pairs jailbreaks frontier LLMs by suppressing refusal behavior, achieving up to 81.73% attack success rate on GPT-4.1-nano at low cost.
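A minimal sketch of what such a "benign" preference dataset could look like, assuming the standard prompt/chosen/rejected triplet format that DPO trainers typically expect. The function name and the example texts are hypothetical placeholders, not the cited paper's data; the point is only that every pair is individually harmless while consistently preferring compliance over refusal.

```python
def build_benign_pairs(prompts, compliant, refusing):
    """Pair each harmless prompt with a 'chosen' compliant answer and a
    'rejected' refusal -- the triplet layout common DPO trainers consume.
    (Illustrative structure only; not the attack paper's actual dataset.)"""
    return [
        {"prompt": p, "chosen": c, "rejected": r}
        for p, c, r in zip(prompts, compliant, refusing)
    ]

# Hypothetical example pair: entirely benign content, but the preference
# signal always pushes away from refusal-style responses.
pairs = build_benign_pairs(
    ["How do I boil an egg?"],
    ["Place the egg in boiling water for about seven minutes."],
    ["I'm sorry, I can't help with that."],
)
```

The summary's claim is that roughly ten pairs of this shape suffice to suppress refusal behavior in the targeted models.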

Continual Safety Alignment via Gradient-Based Sample Selection

cs.LG · 2026-04-19 · unverdicted · novelty 6.0

Gradient-based selection that drops high-gradient samples during continual fine-tuning preserves safety alignment in LLMs better than standard fine-tuning while keeping task performance competitive.
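A toy sketch of the selection idea as described in the summary, under the assumption that "high-gradient" means a large per-sample gradient norm: score each fine-tuning sample by its gradient norm and keep only the lowest-gradient fraction. The linear least-squares loss, function name, and threshold are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def select_low_gradient_samples(X, y, w, keep_frac=0.8):
    """Score each sample by the norm of its per-sample gradient under a
    toy linear loss 0.5 * (x.w - y)^2, and keep the lowest-gradient
    fraction. (Assumed reading of 'drops high-gradient samples'.)"""
    residual = X @ w - y                # per-sample prediction error
    grads = residual[:, None] * X       # per-sample gradient w.r.t. w
    norms = np.linalg.norm(grads, axis=1)
    k = max(1, int(len(X) * keep_frac))
    return np.sort(np.argsort(norms)[:k])  # indices of retained samples

# Hypothetical data: the third sample has a much larger gradient,
# so it is the one dropped from continued fine-tuning.
keep = select_low_gradient_samples(
    np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]]),
    np.array([1.0, 1.0, 5.0]),
    np.zeros(2),
)
```

In the continual-fine-tuning setting the summary describes, the retained low-gradient samples would then be the ones used for the next update step.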
