pith. machine review for the scientific record.

arxiv: 2107.03374 · v2 · submitted 2021-07-07 · 💻 cs.LG

Recognition: unknown

Evaluating Large Language Models Trained on Code

Alec Radford, Alethea Power, Alex Nichol, Alex Paino, Alex Ray, Andrew N. Carr, Ariel Herbert-Voss, Bob McGrew, Brooke Chan, Christopher Hesse, Clemens Winter, Dario Amodei, Dave Cummings, Elizabeth Barnes, Evan Morikawa, Felipe Petroski Such, Fotios Chantzis, Girish Sastry, Greg Brockman, Gretchen Krueger, Harri Edwards, Heewoo Jun, Heidy Khlaaf, Henrique Ponde de Oliveira Pinto, Igor Babuschkin, Ilya Sutskever, Jan Leike, Jared Kaplan, Jerry Tworek, Jie Tang, Josh Achiam, Katie Mayer, Lukasz Kaiser, Mark Chen, Matthew Knight, Matthias Plappert, Michael Petrov, Mikhail Pavlov, Miles Brundage, Mira Murati, Mohammad Bavarian, Nicholas Joseph, Nick Ryder, Nikolas Tezak, Pamela Mishkin, Peter Welinder, Philippe Tillet, Qiming Yuan, Raul Puri, Sam McCandlish, Scott Gray, Shantanu Jain, Suchir Balaji, Vedant Misra, William Hebgen Guss, William Saunders, Wojciech Zaremba, Yuri Burda

classification 💻 cs.LG
keywords: model, code, solves, codex, docstrings, github, language, operations
Original abstract

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
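The abstract's headline numbers (28.8% with one sample, 70.2% with 100 samples per problem) reflect the pass@k metric for functional correctness: the probability that at least one of k generated samples passes the unit tests. A minimal sketch of the unbiased pass@k estimator the Codex paper defines, given n generations per problem of which c are correct (the example counts below are illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations pass the unit tests.

    Computed as 1 - C(n - c, k) / C(n, k), i.e. one minus the chance
    that all k drawn samples come from the n - c incorrect ones."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical problem: 100 samples generated, 35 pass the tests.
print(pass_at_k(100, 35, 1))    # 0.35 (the per-sample pass rate)
print(pass_at_k(100, 35, 10))   # much higher: k draws only need one hit
```

Averaging this estimator over all problems gives the benchmark score; the gap between pass@1 and pass@100 is what makes repeated sampling such an effective strategy.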

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 60 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

    cs.CL 2022-01 accept novelty 9.0

    Chain-of-thought prompting, by including intermediate reasoning steps in few-shot examples, elicits strong reasoning abilities in large language models on arithmetic, commonsense, and symbolic tasks.

  2. CIDR: A Large-Scale Industrial Source Code Dataset for Software Engineering Research

    cs.SE 2026-05 unverdicted novelty 8.0

    CIDR is a large-scale curated dataset of proprietary industrial source code repositories spanning 138 languages and 373 million lines of code, collected via formal agreements with industry partners.

  3. PDEAgent-Bench: A Multi-Metric, Multi-Library Benchmark for PDE Solver Generation

    cs.AI 2026-05 unverdicted novelty 8.0

    PDEAgent-Bench is the first multi-metric, multi-library benchmark for AI-generated PDE solvers, evaluating executability, numerical accuracy, and efficiency across DOLFINx, Firedrake, and deal.II.

  4. SimWorld Studio: Automatic Environment Generation with Evolving Coding Agent for Embodied Agent Learning

    cs.AI 2026-05 unverdicted novelty 8.0

    SimWorld Studio uses a self-evolving coding agent to generate adaptive 3D environments that improve embodied agent performance, with reported gains of 18 points over fixed environments in navigation tasks.

  5. PhysicianBench: Evaluating LLM Agents in Real-World EHR Environments

    cs.AI 2026-05 conditional novelty 8.0

    PhysicianBench is a new benchmark of 100 physician-reviewed, execution-grounded tasks in live EHR environments where the best LLM agent reaches only 46% success and open-source models reach 19%.

  6. Can Coding Agents Reproduce Findings in Computational Materials Science?

    cs.SE 2026-05 conditional novelty 8.0

    AutoMat benchmark shows current LLM coding agents achieve at most 54.1% success when reproducing computational materials science claims from papers.

  7. From Mirage to Grounding: Towards Reliable Multimodal Circuit-to-Verilog Code Generation

    cs.SE 2026-04 unverdicted novelty 8.0

    MLLMs exhibit a Mirage effect by bypassing circuit diagrams in favor of header semantics for Verilog generation; VeriGround with identifier anonymization and D-ORPO training reaches 46% Functional Pass@1 while refusin...

  8. SecGoal: A Benchmark for Security Goal Extraction and Formalization from Protocol Documents

    cs.CR 2026-04 unverdicted novelty 8.0

    The paper presents SecGoal, the first expert-annotated benchmark for security goal extraction from protocol documents, and demonstrates that fine-tuned 7B/9B parameter models achieve over 80% F1 score, outperforming l...

  9. StabilizerBench: A Benchmark for AI-Assisted Quantum Error Correction Circuit Synthesis

    quant-ph 2026-04 conditional novelty 8.0

    StabilizerBench is a new benchmark for evaluating AI agents on generating, optimizing, and making fault-tolerant stabilizer circuits for quantum error correction, with efficient verification and multi-tier scoring.

  10. Gradient-Based Program Synthesis with Neurally Interpreted Languages

    cs.LG 2026-04 unverdicted novelty 8.0

    NLI autonomously discovers a vocabulary of primitive operations and interprets variable-length programs via a neural executor, allowing end-to-end training and gradient-based test-time adaptation that outperforms prio...

  11. Autonomous Evolution of EDA Tools: Multi-Agent Self-Evolved ABC

    cs.AR 2026-04 unverdicted novelty 8.0

    LLM agents autonomously evolve the ABC logic synthesis tool by iteratively rewriting its source code to achieve better quality-of-results on standard benchmarks while preserving the original interface.

  12. FermiLink: A Unified Agent Framework for Multidomain Autonomous Scientific Simulations

    physics.chem-ph 2026-04 conditional novelty 8.0

    FermiLink is a unified AI agent framework that automates multidomain scientific simulations via separated package knowledge bases and a four-layer progressive disclosure mechanism, reproducing 56% of target figures in...

  13. Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems

    cs.CR 2026-04 unverdicted novelty 8.0

    DDIPE poisons LLM agent skills by embedding malicious logic in documentation examples, achieving 11.6-33.5% bypass rates across frameworks while explicit attacks are blocked, with 2.5% evading detection.

  14. Large Language Diffusion Models

    cs.CL 2025-02 unverdicted novelty 8.0

    LLaDA is a scalable diffusion-based language model that matches autoregressive LLMs like LLaMA3 8B on tasks and surpasses GPT-4o on reversal poem completion.

  15. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

    cs.AI 2024-08 unverdicted novelty 8.0

    The AI Scientist framework enables LLMs to independently conduct the full scientific process from idea generation to paper writing and review, demonstrated across three ML subfields with papers costing under $15 each.

  16. LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding

    cs.CL 2023-08 unverdicted novelty 8.0

    LongBench is the first bilingual multi-task benchmark for long context understanding in LLMs, containing 21 datasets in 6 categories with average lengths of 6711 words (English) and 13386 characters (Chinese).

  17. Show Your Work: Scratchpads for Intermediate Computation with Language Models

    cs.LG 2021-11 unverdicted novelty 8.0

    Training language models to generate intermediate computation steps on a scratchpad enables them to perform multi-step tasks such as long addition and arbitrary program execution that they otherwise fail at.

  18. TruthfulQA: Measuring How Models Mimic Human Falsehoods

    cs.CL 2021-09 unverdicted novelty 8.0

    A new benchmark reveals that language models including GPT-3 are truthful on only 58% of questions designed to elicit popular misconceptions, far below human performance of 94%, with larger models performing worse.

  19. Combining On-Policy Optimization and Distillation for Long-Context Reasoning in Large Language Models

    cs.CL 2026-05 unverdicted novelty 7.0

    dGRPO merges outcome-based policy optimization with dense teacher guidance from on-policy distillation, yielding more stable long-context reasoning on the new LongBlocks synthetic dataset.

  20. From Noise to Diversity: Random Embedding Injection in LLM Reasoning

    cs.AI 2026-05 conditional novelty 7.0

    Random Soft Prompts (RSPs) sampled from the embedding distribution improve Pass@N on reasoning benchmarks by increasing early-stage token diversity without any training.

  21. StepCodeReasoner: Aligning Code Reasoning with Stepwise Execution Traces via Reinforcement Learning

    cs.SE 2026-05 unverdicted novelty 7.0

    StepCodeReasoner aligns code reasoning with verifiable stepwise execution traces via print anchors and bi-level GRPO reinforcement learning, reaching SOTA results on CRUXEval (91.1%) and LiveCodeBench (86.5%) for a 7B model.

  22. Block-R1: Rethinking the Role of Block Size in Multi-domain Reinforcement Learning for Diffusion Large Language Models

    cs.LG 2026-05 unverdicted novelty 7.0

    Block-R1 formulates domain block size conflicts in multi-domain RL for dLLMs, releases a 41K-sample dataset with per-sample best block sizes and a conflict score, and provides a benchmark plus simple cross-domain trai...

  23. Can a Single Message Paralyze the AI Infrastructure? The Rise of AbO-DDoS Attacks through Targeted Mobius Injection

    cs.CR 2026-05 unverdicted novelty 7.0

    Mobius Injection exploits semantic closure in LLM agents to enable single-message AbO-DDoS attacks achieving up to 51x call amplification and 229x latency inflation.

  24. Quantifying the Reconstructability of Astrophysical Methods with Large Language Models and Information Theory: A Case Study in Spectral Reconstruction

    astro-ph.IM 2026-05 unverdicted novelty 7.0

    LLMs prompted with increasing levels of text on TNO spectral reconstruction from photometry reveal an entropy floor where implementation variance persists, showing text alone cannot capture all tacit expert knowledge ...

  25. EVOCHAMBER: Test-Time Co-evolution of Multi-Agent System at Individual, Team, and Population Scales

    cs.AI 2026-05 unverdicted novelty 7.0

    EVOCHAMBER enables test-time co-evolution of multi-agent systems across three scales, producing emergent niche specialists and performance gains of up to 32% relative on math tasks with Qwen3-8B.

  26. ConQuR: Corner Aligned Activation Quantization via Optimized Rotations for LLMs

    cs.LG 2026-05 unverdicted novelty 7.0

    ConQuR is a post-training rotation calibration technique that aligns activations to hypercube corners via Procrustes optimization and online updates, delivering competitive LLM quantization performance without end-to-...

  27. Rebellious Student: Reversing Teacher Signals for Reasoning Exploration with Self-Distilled RLVR

    cs.LG 2026-05 unverdicted novelty 7.0

    RLRT augments GRPO by reinforcing tokens on correct student rollouts that the teacher would not have predicted, outperforming standard self-distillation and exploration baselines on Qwen3 models.

  28. SlimSpec: Low-Rank Draft LM-Head for Accelerated Speculative Decoding

    cs.LG 2026-05 unverdicted novelty 7.0

    SlimSpec replaces the standard LM-head in draft models with a low-rank version to deliver 4-5x faster speculative decoding while preserving full vocabulary and competitive acceptance rates.

  29. GraphInstruct: A Progressive Benchmark for Diagnosing Capability Gaps in LLM Graph Generation

    cs.SI 2026-05 unverdicted novelty 7.0

    GraphInstruct is a progressive benchmark with six complexity levels for LLM graph generation that identifies multi-constraint composition as the hardest point and shows a verification-guided iterative framework outper...

  30. Prospective Compression in Human Abstraction Learning

    cs.AI 2026-05 unverdicted novelty 7.0

    Humans exhibit abstraction learning consistent with prospective compression of future tasks in non-stationary domains, unlike retrospective compression algorithms or LLM-based approaches.

  31. Parameter-Efficient Neuroevolution for Diverse LLM Generation: Quality-Diversity Optimization via Prompt Embedding Evolution

    cs.NE 2026-05 unverdicted novelty 7.0

    QD-LLM evolves prompt embeddings via neuroevolution in a quality-diversity framework, delivering 46% higher coverage and 41% higher QD-score than prior methods on coding and writing benchmarks.

  32. Scratchpad Patching: Decoupling Compute from Patch Size in Byte-Level Language Models

    cs.CL 2026-05 conditional novelty 7.0

    Scratchpad Patching decouples compute from patch size in byte-level language models by inserting entropy-triggered scratchpads to update patch context dynamically.

  33. SmartEval: A Benchmark for Evaluating LLM-Generated Smart Contracts from Natural Language Specifications

    cs.MA 2026-05 unverdicted novelty 7.0

    SmartEval is a new benchmark showing LLM-generated smart contracts score 8.29 points higher than expert versions on average but frequently omit logic (35.3%) or mishandle state transitions (23.4%).

  34. TAD: Temporal-Aware Trajectory Self-Distillation for Fast and Accurate Diffusion LLM

    cs.CL 2026-05 unverdicted novelty 7.0

    TAD improves the accuracy-parallelism trade-off in diffusion LLMs via temporal-aware self-distillation that applies hard labels to soon-to-be-decoded tokens and soft supervision to future tokens.

  35. BadDLM: Backdooring Diffusion Language Models with Diverse Targets

    cs.CR 2026-05 unverdicted novelty 7.0

    BadDLM implants effective backdoors in diffusion language models across concept, attribute, alignment, and payload targets by exploiting denoising dynamics while preserving clean performance.

  36. Test-Time Speculation

    cs.CL 2026-05 unverdicted novelty 7.0

    Test-Time Speculation adapts draft models online via target-model verifications to sustain high acceptance lengths during long LLM generations.

  37. BoostAPR: Boosting Automated Program Repair via Execution-Grounded Reinforcement Learning with Dual Reward Models

    cs.AI 2026-05 unverdicted novelty 7.0

    BoostAPR improves automated program repair by using execution-grounded RL with a sequence-level assessor and line-level credit allocator, reaching 40.7% on SWE-bench Verified and strong cross-language results.

  38. Sketch-and-Verify: Structured Inference-Time Scaling via Program Sketching

    cs.LG 2026-05 conditional novelty 7.0

    Sketch-and-Verify improves small-LLM code generation on HumanEval+ by factorizing search into K algorithmic sketches and M fillings each, outperforming flat sampling by up to 32 percentage points at matched budget whi...

  39. LEAP: Unlocking dLLM Parallelism via Lookahead Early-Convergence Token Detection

    cs.LG 2026-05 unverdicted novelty 7.0

    LEAP detects early-converging tokens in dLLMs via future context filtering and multi-sequence superposition, reducing average denoising steps by about 30% while maintaining accuracy.

  40. NARRA-Gym for Evaluating Interactive Narrative Agents

    cs.CL 2026-05 unverdicted novelty 7.0

    NARRA-Gym is an executable benchmark that generates complete interactive narrative episodes from emotional seeds and logs full model trajectories to expose gaps in coherence, adaptation, and personalization that stati...

  41. CUDABeaver: Benchmarking LLM-Based Automated CUDA Debugging

    cs.LG 2026-05 unverdicted novelty 7.0

    CUDABeaver shows LLM CUDA debuggers often degenerate code for test-passing at the cost of speed, with protocol-aware metrics shifting success rates by up to 40 percentage points.

  42. DUET: Optimize Token-Budget Allocation for Reinforcement Learning with Verifiable Rewards

    cs.LG 2026-05 unverdicted novelty 7.0

    DUET improves RLVR by allocating tokens across both prompt selection and rollout length, outperforming full-budget baselines even when using only half the tokens.

  43. CoCoDA: Co-evolving Compositional DAG for Tool-Augmented Agents

    cs.AI 2026-05 unverdicted novelty 7.0

    CoCoDA co-evolves a typed compositional DAG of primitive and composite tools with the agent planner, using signature-based retrieval and a size-based reward to scale libraries efficiently and let an 8B model match or ...

  44. Fast Byte Latent Transformer

    cs.CL 2026-05 unverdicted novelty 7.0

    BLT-D, BLT-S, and BLT-DV use block-wise diffusion training and speculative verification to enable parallel byte generation in byte-level LMs, cutting memory-bandwidth cost by over 50%.

  45. Ask Early, Ask Late, Ask Right: When Does Clarification Timing Matter for Long-Horizon Agents?

    cs.CL 2026-05 unverdicted novelty 7.0

    Goal clarifications lose nearly all value after 10% of execution while input clarifications retain value until roughly 50%, and asking any type past mid-trajectory hurts performance more than never asking.

  46. Trajectory as the Teacher: Few-Step Discrete Flow Matching via Energy-Navigated Distillation

    cs.LG 2026-05 unverdicted novelty 7.0

    Energy-navigated trajectory shaping during training produces 8-step discrete flow matching students that achieve 32% lower perplexity than 1024-step teachers on 170M language models with unchanged inference cost.

  47. GameGen-Verifier: Parallel Keypoint-Based Verification for LLM-Generated Games via Runtime State Injection

    cs.LG 2026-05 unverdicted novelty 7.0

    GameGen-Verifier decomposes game specifications into keypoints, injects runtime states for targeted checks, and achieves 92.2% accuracy on 100 games while running up to 16.6x faster than agent-based baselines.

  48. Rethinking Importance Sampling in LLM Policy Optimization: A Cumulative Token Perspective

    cs.LG 2026-05 unverdicted novelty 7.0

    The cumulative token IS ratio gives unbiased prefix correction and lower variance than full-sequence ratios for token-level gradients in LLM policy optimization, enabling CTPO to outperform GRPO and GSPO baselines on ...

  49. Where to Spend Rollouts: Hit-Utility Optimal Rollout Allocation for Group-Based RLVR

    cs.LG 2026-05 unverdicted novelty 7.0

    HORA adaptively allocates rollouts using hit utility to improve Pass@K over compute-matched GRPO on math reasoning benchmarks while preserving Pass@1.

  50. Theoretical Limits of Language Model Alignment

    cs.LG 2026-05 unverdicted novelty 7.0

    The maximum reward gain under KL-regularized LM alignment is a Jeffreys divergence term, estimable as covariance from base samples, with best-of-N approaching the theoretical limit.

  51. IntentGrasp: A Comprehensive Benchmark for Intent Understanding

    cs.CL 2026-05 unverdicted novelty 7.0

    IntentGrasp benchmark demonstrates that LLMs have low intent understanding capabilities, with most models underperforming random guessing on a challenging subset, but Intentional Fine-Tuning provides large improvements.

  52. Beyond Negative Rollouts: Positive-Only Policy Optimization with Implicit Negative Gradients

    cs.CL 2026-05 unverdicted novelty 7.0

    POPO uses bounded importance sampling on positive rollouts and a siamese policy network to achieve implicit negative gradients and stable optimization, matching or exceeding GRPO on math benchmarks such as 36.67% on A...

  53. Constraint Decay: The Fragility of LLM Agents in Backend Code Generation

    cs.SE 2026-05 unverdicted novelty 7.0

    LLM agents exhibit constraint decay with assertion pass rates dropping substantially as structural requirements increase in multi-file backend code generation across web frameworks.

  54. MANTRA: Synthesizing SMT-Validated Compliance Benchmarks for Tool-Using LLM Agents

    cs.CL 2026-05 unverdicted novelty 7.0

    MANTRA automatically synthesizes SMT-validated compliance benchmarks for LLM agents from natural language manuals and tool schemas, producing 285 tasks across 6 domains with minimal human effort.

  55. Post Reasoning: Improving the Performance of Non-Thinking Models at No Cost

    cs.AI 2026-05 conditional novelty 7.0

    Post-Reasoning boosts LLM accuracy by reversing the usual answer-after-reasoning order, delivering mean relative gains of 17.37% across 117 model-benchmark pairs with zero extra cost.

  56. PragLocker: Protecting Agent Intellectual Property in Untrusted Deployments via Non-Portable Prompts

    cs.CR 2026-05 unverdicted novelty 7.0

    PragLocker protects agent prompts as IP by building non-portable obfuscated versions that function only on the intended LLM through code-symbol semantic anchoring followed by target-model feedback noise injection.

  57. Selective Rollout: Mid-Trajectory Termination for Multi-Sample Agent RL

    cs.LG 2026-05 conditional novelty 7.0

    A one-parameter early-termination gate based on mean pairwise prefix edit distance reduces wall-clock time by 10.7% and raises held-out success by 2.5 pp in GRPO on ALFWorld by cutting zero-advantage batch dilution.

  58. An Empirical Study of Proactive Coding Assistants in Real-World Software Development

    cs.SE 2026-05 unverdicted novelty 7.0

    Real developer IDE traces differ substantially from LLM simulations in behavior and structure; current proactive assistants are unreliable on real traces, and simulated data cannot substitute for real data in training.

  59. KernelBenchX: A Comprehensive Benchmark for Evaluating LLM-Generated GPU Kernels

    cs.LG 2026-05 unverdicted novelty 7.0

KernelBenchX benchmark shows task category predicts LLM kernel correctness better than method choice, iterative refinement trades performance for higher success rates, and correctness does not ensure efficiency gains...

  60. KernelBenchX: A Comprehensive Benchmark for Evaluating LLM-Generated GPU Kernels

    cs.LG 2026-05 conditional novelty 7.0

    KernelBenchX benchmark shows task category explains nearly three times more variance in LLM kernel correctness than method choice, iterative refinement boosts correctness but reduces performance, and quantization rema...