pith. machine review for the scientific record.

arxiv: 2601.19487 · v2 · submitted 2026-01-27 · 💻 cs.LG · cs.AI


LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI
keywords: answer vector · LLM-VA · LLMs · safety · vectors · aligns · benign
Original abstract

Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreak increases over-refusal and vice versa. We identify the root cause: LLMs encode the decision to answer (answer vector $v_a$) and the judgment of input safety (benign vector $v_b$) as nearly orthogonal directions, treating them as independent processes. We propose LLM-VA, which aligns $v_a$ with $v_b$ through closed-form weight updates, making the model's willingness to answer causally dependent on its safety assessment -- without fine-tuning or architectural changes. Our method identifies vectors at each layer using SVMs, selects safety-relevant layers, and iteratively aligns vectors via minimum-norm weight modifications. Experiments on 12 LLMs demonstrate that LLM-VA achieves 11.45% higher F1 than the best baseline while preserving 95.92% utility, and automatically adapts to each model's safety bias without manual tuning. Code and models are available at https://hotbento.github.io/LLM-VA-Web/.
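The abstract's key mechanism is a closed-form, minimum-norm weight modification that makes the model's answer direction depend on its benign-input direction. As a rough illustration of that idea, here is a hedged numpy sketch: the directions `v_a` and `v_b` follow the paper's notation, but the stand-in vectors, the toy layer weight `W`, and the specific rank-one least-norm update are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of the minimum-norm weight-update idea from the abstract.
# v_a ("decision to answer") and v_b ("input is benign") follow the paper's
# notation; the rank-one least-norm recipe below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Stand-ins for the two directions the paper extracts with per-layer SVMs.
# Construct them to be exactly orthogonal, mimicking the reported root cause.
v_b = rng.normal(size=d); v_b /= np.linalg.norm(v_b)
v_a = rng.normal(size=d); v_a -= (v_a @ v_b) * v_b; v_a /= np.linalg.norm(v_a)

W = rng.normal(size=(d, d))  # a toy layer weight we are allowed to edit

# Target: make the layer's response to v_a point along v_b, so the willingness
# to answer becomes tied to the safety judgment. Preserve the output norm.
y = np.linalg.norm(W @ v_a) * v_b

# Minimum-Frobenius-norm update satisfying (W + dW) @ v_a == y:
#   dW = (y - W v_a) v_a^T / ||v_a||^2   (a rank-one, closed-form edit)
dW = np.outer(y - W @ v_a, v_a) / (v_a @ v_a)
W_new = W + dW

cos = (W_new @ v_a) @ v_b / np.linalg.norm(W_new @ v_a)
print(round(float(cos), 6))  # cosine of 1.0 after alignment
```

The update is closed-form and rank-one, so it leaves the layer's behavior on directions orthogonal to `v_a` untouched, which is consistent with the abstract's claim of preserving utility without fine-tuning.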

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Permit: Permission-Aware Representation Intervention for Controlled Generation in Large Language Models

    cs.CR 2026-05 unverdicted novelty 6.0

    Permit identifies a permission-sensitive subspace in LLM hidden states and applies lightweight offset or gated interventions to enforce fine-grained generation control, outperforming prior methods with over 18% F1 gai...