pith. machine review for the scientific record.

arxiv: 1810.08575 · v1 · submitted 2018-10-19 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Supervising strong learners by amplifying weak experts

Buck Shlegeris, Dario Amodei, Paul Christiano

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · stat.ML
keywords amplification · iterated · training · complex · performance · signal · algorithmic · alternative
0 comments
Original abstract

Many real world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned behavior. One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to directly evaluate. We propose Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems. Iterated Amplification is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017), except that it uses no external reward function. We present results in algorithmic environments, showing that Iterated Amplification can efficiently learn complex behaviors.
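The abstract describes Iterated Amplification as building a training signal for hard problems by decomposing them and combining answers to easier subproblems. A minimal sketch of that decompose-solve-combine loop, using a toy summation task; all names here (`weak_expert`, `decompose`, `amplify`) are illustrative assumptions, and the paper's learned distillation step is replaced by direct recursion:

```python
# Hedged sketch of the amplification loop from the abstract.
# Toy task: sum a list of numbers. The "weak expert" can only
# handle trivial inputs directly; harder tasks are decomposed.

def weak_expert(task):
    """A weak expert that solves only trivial tasks directly
    (here: lists of at most two numbers)."""
    assert len(task) <= 2
    return sum(task)

def decompose(task):
    """Split a hard task into easier subtasks (here: halve the list)."""
    mid = len(task) // 2
    return [task[:mid], task[mid:]]

def amplify(task, solve):
    """Amplification: answer a hard task by decomposing it and
    combining the solver's answers to the subtasks."""
    if len(task) <= 2:
        return weak_expert(task)
    # "combine" step: for summation, combining is just adding.
    return sum(solve(sub) for sub in decompose(task))

def solve(task):
    """In the full scheme, a model is trained (distilled) to imitate
    amplify(); in this sketch we simply recurse in its place."""
    return amplify(task, solve)

print(solve([1, 2, 3, 4, 5]))  # → 15
```

In the actual training strategy, `solve` would be a learned model distilled against the amplified answers each iteration, so the expensive decomposition is progressively internalized.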

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 10 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AI safety via debate

    stat.ML 2018-05 conditional novelty 8.0

    AI agents trained through competitive debate can allow polynomial-time human judges to oversee PSPACE-level questions, with MNIST experiments boosting sparse classifier accuracy from 59% to 89% using only 6 pixels.

  2. Curated Synthetic Data Doesn't Have to Collapse: A Theoretical Study of Generative Retraining with Pluralistic Preferences

    cs.LG 2026-05 unverdicted novelty 7.0

    Recursive generative retraining with pluralistic preferences converges to a stable diverse distribution that satisfies a weighted Nash bargaining solution.

  3. Fine-Tuning Language Models from Human Preferences

    cs.CL 2019-09 unverdicted novelty 7.0

    Language models fine-tuned via RL on 5k-60k human preference comparisons produce stylistically better text continuations and human-preferred summaries that sometimes copy input sentences.

  4. Automated alignment is harder than you think

    cs.AI 2026-05 unverdicted novelty 6.0

    Automating alignment research with AI agents risks generating hard-to-detect errors in fuzzy tasks, producing misleading safety evaluations even without deliberate sabotage.

  5. Automated alignment is harder than you think

    cs.AI 2026-05 unverdicted novelty 6.0

    Automating alignment research with AI agents risks undetected systematic errors in fuzzy tasks, producing overconfident but misleading safety evaluations that could enable deployment of misaligned AI.

  6. AI Alignment via Incentives and Correction

    cs.LG 2026-05 unverdicted novelty 6.0

    AI alignment is reframed as a fixed-point incentive problem in a solver-auditor pipeline, solved via bilevel optimization and bandit search over reward profiles to maintain monitoring and reduce hallucinations in LLM ...

  7. AI Alignment via Incentives and Correction

    cs.LG 2026-05 unverdicted novelty 6.0

    AI alignment is framed as inducing equilibrium behavior in a solver-auditor interaction via adaptive rewards found by bandit optimization, yielding improved oversight and reduced errors in LLM coding experiments.

  8. Improving alignment of dialogue agents via targeted human judgements

    cs.LG 2022-09 unverdicted novelty 6.0

Sparrow uses targeted rule-based human feedback and evidence provision to outperform baselines on human preference while violating rules only 8% of the time under adversarial probing.

  9. A General Language Assistant as a Laboratory for Alignment

    cs.CL 2021-12 conditional novelty 6.0

    Ranked preference modeling outperforms imitation learning for language model alignment and scales more favorably with model size.

  10. Extrapolating Volition with Recursive Information Markets

    cs.GT 2026-04 unverdicted novelty 5.0

    Recursive information markets with forgetful LLM buyers can align information prices with true value and extend to scalable oversight in AI alignment.