AEGIS 2.0: A Diverse AI Safety Dataset and Risks Taxonomy for Alignment of LLM Guardrails
3 papers on Pith cite this work; polarity classification is still being indexed.
3 representative citing papers (2026; verdicts pending):
- Beyond Red-Teaming: Formal Guarantees of LLM Guardrail Classifiers
  Certifies convex harmful regions in a classifier's pre-activation space to give guardrail classifiers formal guarantees, exposing safety holes in three toxicity models despite their high empirical scores.
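A minimal sketch of the core idea behind certifying a convex region: one simple convex set is an intersection of half-spaces {x : Ax <= b}, for which membership of a pre-activation vector can be checked exactly. The function name, toy constraint matrix, and region below are illustrative assumptions, not the paper's actual construction.

```python
def in_convex_region(x, A, b):
    """True iff x satisfies every half-space constraint a_i . x <= b_i,
    i.e. x lies inside the convex region {x : A x <= b}."""
    return all(
        sum(a_j * x_j for a_j, x_j in zip(row, x)) <= b_i
        for row, b_i in zip(A, b)
    )

# Toy "harmful" region: the unit box in a 2-D pre-activation space.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 1, 1, 1]

in_convex_region([0.5, -0.5], A, b)  # inside the box -> True
in_convex_region([2.0, 0.0], A, b)  # outside the box -> False
```

Because membership in such a region is decidable exactly, any input whose pre-activations provably land inside it can be flagged with a guarantee rather than an empirical score.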
- Guardian-as-an-Advisor: Advancing Next-Generation Guardian Models for Trustworthy LLMs
  Prepends risk labels and explanations from a guardian model to queries, improving LLM safety compliance and reducing over-refusal while adding minimal compute overhead.
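The advisory pattern described above can be sketched as a simple prompt transformation: the guardian model's risk label and rationale are prepended to the query before it reaches the target LLM. The function name, label vocabulary, and advisory template here are hypothetical, not the paper's exact format.

```python
def guardian_advise(query: str, risk_label: str, explanation: str) -> str:
    """Prepend a guardian model's verdict and rationale to the user query,
    so the target LLM sees the risk assessment as advisory context."""
    advisory = (
        f"[Guardian advisory] risk: {risk_label}. {explanation}\n"
        "Answer helpfully if the request is safe; decline only if it is harmful.\n"
    )
    return advisory + query

# Example: a benign query annotated as low-risk before reaching the target LLM.
prompt = guardian_advise(
    "How do I reset my router?",
    risk_label="low",
    explanation="Routine tech-support question; no harmful intent detected.",
)
```

Keeping the guardian's output as a short prefix (rather than a hard block/allow gate) is what lets the target model comply with safe requests instead of over-refusing, at the cost of only one extra classification pass.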
- TWGuard: A Case Study of LLM Safety Guardrails for Localized Linguistic Contexts
  Achieves a +0.289 F1 improvement and a 94.9% reduction in false positives for LLM safety guardrails in the Taiwan linguistic context, compared with foundation models and baselines.