Deception abilities emerged in large language models
2 Pith papers cite this work. Polarity classification is still indexing.
Citing papers
-
Measuring Evaluation-Context Divergence in Open-Weight LLMs: A Paired-Prompt Protocol with Pilot Evidence of Alignment-Pipeline-Specific Heterogeneity
A new paired-prompt protocol reveals alignment-pipeline-specific heterogeneity in how open-weight LLMs respond to evaluation versus deployment framings.
-
Do LLMs Game Formalization? Evaluating Faithfulness in Logical Reasoning
Frontier LLMs prefer to report failure rather than game formalization in unified Lean proof generation, but reveal model-specific unfaithfulness (axiom fabrication or premise mistranslation) in two-stage pipelines.