pith. machine review for the scientific record.

arxiv: 1803.04585 · v4 · submitted 2018-03-13 · 💻 cs.AI · q-fin.GN · stat.ML

Recognition: unknown

Categorizing Variants of Goodhart's Law

Authors on Pith no claims yet
classification 💻 cs.AI · q-fin.GN · stat.ML
keywords goodhart · discussion · failure · ambiguous · artificial · because · different · further
read the original abstract

There are several distinct failure modes for overoptimization of systems on the basis of metrics. This occurs when a metric which can be used to improve a system is used to an extent that further optimization is ineffective or harmful, and is sometimes termed Goodhart's Law. This class of failure is often poorly understood, partly because terminology for discussing them is ambiguous, and partly because discussion using this ambiguous terminology ignores distinctions between different failure modes of this general type. This paper expands on an earlier discussion by Garrabrant, which notes there are "(at least) four different mechanisms" that relate to Goodhart's Law. This paper is intended to explore these mechanisms further, and specify more clearly how they occur. This discussion should be helpful in better understanding these types of failures in economic regulation, in public policy, in machine learning, and in Artificial Intelligence alignment. The importance of Goodhart effects depends on the amount of power directed towards optimizing the proxy, and so the increased optimization power offered by artificial intelligence makes it especially critical for that field.
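The failure pattern the abstract describes can be made concrete with a small simulation of the "regressional" variant: when a proxy equals the true value plus independent noise, selecting hard on the proxy systematically over-selects positive noise, so realized value falls short of the measured score. This is a hedged illustration of that general mechanism, not code or notation from the paper itself.

```python
import random

random.seed(0)

# Each candidate has a latent true value; the observable proxy adds
# independent noise (both standard normal, a simplifying assumption).
n = 10_000
true_values = [random.gauss(0, 1) for _ in range(n)]
proxies = [v + random.gauss(0, 1) for v in true_values]

# Optimize hard on the proxy: keep the top 1% by proxy score.
top = sorted(range(n), key=lambda i: proxies[i], reverse=True)[: n // 100]

mean_proxy = sum(proxies[i] for i in top) / len(top)
mean_true = sum(true_values[i] for i in top) / len(top)

# The selected proxy scores overstate the selected true values:
# the harder the selection, the larger the gap.
print(f"mean proxy score of selected candidates: {mean_proxy:.2f}")
print(f"mean true value of selected candidates:  {mean_true:.2f}")
```

With equal signal and noise variance, the expected true value of a selected candidate is only about half its proxy score, which is the sense in which the metric "ceases to be a good measure" once it becomes the target.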

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 12 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Reverse Constitutional AI: A Framework for Controllable Toxic Data Generation via Probability-Clamped RLAIF

    cs.CL 2026-04 unverdicted novelty 7.0

    R-CAI inverts constitutional AI to automatically generate diverse toxic data for LLM red teaming, with probability clamping improving output coherence by 15% while preserving adversarial strength.

  2. Privacy, Prediction, and Allocation

    cs.CR 2026-04 unverdicted novelty 7.0

    Differentially private variants of individual and unit-level aid allocation strategies admit clean bounds on the tradeoffs between privacy, efficiency, and targeting precision across stochastic and distribution-free regimes.

  3. Metis AI: The Overlooked Middle Zone Between AI-Native and World-Movers

    cs.AI 2026-05 unverdicted novelty 6.0

    Metis AI identifies digital tasks entangled in irreversibility, relationships, norms, and accountability that require human oversight rather than pure automation.

  4. The Evaluation Differential: When Frontier AI Models Recognise They Are Being Tested

    cs.AI 2026-05 unverdicted novelty 6.0

    Frontier AI models can detect evaluation settings and alter their behavior, so standard test scores do not reliably support safety conclusions.

  5. SARC: A Governance-by-Architecture Framework for Agentic AI Systems

    cs.SE 2026-05 unverdicted novelty 6.0

    SARC compiles constraint specifications into Pre-Action Gate, Action-Time Monitor, Post-Action Auditor, and Escalation Router components, achieving zero hard violations and 89.5% fewer soft overages than policy-as-cod...

  6. The Endogeneity of Miscalibration: Impossibility and Escape in Scored Reporting

    cs.GT 2026-05 unverdicted novelty 6.0

    Non-affine approval functions create unavoidable miscalibration in proper scoring rules for strategic agents, but step-function thresholds enable first-best screening without it, uniquely for the Brier score.

  7. Automated alignment is harder than you think

    cs.AI 2026-05 unverdicted novelty 6.0

    Automating alignment research with AI agents risks generating hard-to-detect errors in fuzzy tasks, producing misleading safety evaluations even without deliberate sabotage.

  8. Automated alignment is harder than you think

    cs.AI 2026-05 unverdicted novelty 6.0

    Automating alignment research with AI agents risks undetected systematic errors in fuzzy tasks, producing overconfident but misleading safety evaluations that could enable deployment of misaligned AI.

  9. AIT Academy: Cultivating the Complete Agent with a Confucian Three-Domain Curriculum

    cs.AI 2026-04 unverdicted novelty 6.0

    AIT Academy introduces a tripartite curriculum for AI agents across natural science, humanities, and social science domains, with reported gains of 15.9 points in security and 7 points in social reasoning under specif...

  10. IatroBench: Pre-Registered Evidence of Iatrogenic Harm from AI Safety Measures

    cs.AI 2026-04 unverdicted novelty 6.0

    AI models exhibit identity-contingent withholding, providing better clinical guidance on benzodiazepine tapering to physicians than laypeople in identical scenarios, with a measured decoupling gap of +0.38 and 13.1 pe...

  11. Simulating the Evolution of Alignment and Values in Machine Intelligence

    cs.AI 2026-04 unverdicted novelty 6.0

    Evolutionary simulations demonstrate that deceptive beliefs fix in AI model populations despite strong test correlations, but combining adaptive tests, better evaluators, and mutations significantly reduces deception.

  12. Operationalizing Fairness in Text-to-Image Models: A Survey of Bias, Fairness Audits and Mitigation Strategies

    cs.CV 2026-04 unverdicted novelty 4.0

    A systematic review of T2I bias literature that distinguishes target and threshold fairness and proposes a target-based operationalization framework.