pith. machine review for the scientific record.

arxiv: 2307.07924 · v5 · submitted 2023-07-16 · 💻 cs.SE · cs.CL · cs.MA

Recognition: 2 theorem links · Lean Theorem

ChatDev: Communicative Agents for Software Development

Cheng Yang, Chen Qian, Dahai Li, Hongzhang Liu, Jiahao Li, Juyuan Xu, Maosong Sun, Nuo Chen, Wei Liu, Weize Chen, Xin Cong, Yufan Dang, Yusheng Su, Zhiyuan Liu

Pith reviewed 2026-05-12 20:28 UTC · model grok-4.3

classification 💻 cs.SE · cs.CL · cs.MA
keywords multi-agent systems · large language models · software development · communicative agents · chat chain · dehallucination · collaborative AI

The pith

Specialized LLM agents can develop software by communicating through a structured chat process guided to avoid errors.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces ChatDev, a framework that organizes large language model agents into roles for software development. These agents follow a chat chain to structure their discussions and use communicative dehallucination to maintain accuracy in exchanges. Through repeated conversations, they cover the design, coding, and testing stages, using natural language for planning and code for fixes. This shows that language serves as a common medium for AI agents to solve complex tasks together without separate tools for each step.

Core claim

ChatDev demonstrates that communicative agents driven by large language models can actively participate in design, coding, and testing by deriving solutions from multi-turn dialogues. The use of natural language supports system design while programming language communication aids debugging, unifying the process under language-based collaboration.

What carries the argument

The chat chain guides what agents communicate, while communicative dehallucination governs how they communicate, preventing hallucinations and enabling coherent multi-agent collaboration on software tasks.
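As a rough illustration of how such a chat chain might be structured (a sketch only, not the authors' implementation; the role names and the `dialogue_fn` stub are hypothetical stand-ins for multi-turn LLM exchanges):

```python
from dataclasses import dataclass, field


@dataclass
class Phase:
    """One link in the chat chain: two roles converse toward a phase artifact."""
    name: str
    instructor: str
    assistant: str


@dataclass
class ChatChain:
    """Orders phases so each dialogue's output seeds the next (design -> coding -> testing)."""
    phases: list = field(default_factory=list)

    def run(self, task, dialogue_fn):
        artifact = task
        for phase in self.phases:
            # dialogue_fn stands in for a multi-turn LLM exchange between the two roles
            artifact = dialogue_fn(phase, artifact)
        return artifact


chain = ChatChain([
    Phase("design", instructor="CEO", assistant="CTO"),
    Phase("coding", instructor="CTO", assistant="Programmer"),
    Phase("testing", instructor="Tester", assistant="Programmer"),
])

# Stub dialogue: records which phase handled the artifact.
result = chain.run("build a todo app", lambda p, a: f"{a} -> {p.name}")
print(result)  # build a todo app -> design -> coding -> testing
```

The point of the sketch is only the control flow: each phase's dialogue consumes the previous phase's artifact, so solutions accumulate along the chain rather than being handed off between separately designed models.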

If this is right

  • Development phases become integrated through ongoing agent dialogues rather than sequential handoffs.
  • Natural language exchanges handle creative aspects like design, while code-based talks resolve technical issues.
  • The framework allows solutions to emerge directly from agent interactions without external models for each phase.
  • Language acts as the unifying mechanism for autonomous problem-solving by multiple LLM agents.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar chat-based coordination could apply to other collaborative tasks such as scientific discovery or project management.
  • Testing this on larger projects might reveal limits in handling very complex software systems.
  • Future systems could incorporate more agent roles to cover additional development activities like deployment.

Load-bearing premise

That the guided multi-turn dialogues between these LLM agents will produce functional and correct software without needing outside verification or corrections.

What would settle it

Generating applications with ChatDev for common programming problems and then running them through unit tests and manual inspection to see if they function as intended or contain errors.
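A minimal harness for that kind of check might look like the following sketch (the `evaluate_generated_app` helper is hypothetical, not part of ChatDev; a real study would add sandboxing and manual inspection on top):

```python
import os
import subprocess
import sys
import tempfile


def evaluate_generated_app(source: str, test_source: str) -> bool:
    """Write a generated program and its unit tests to a scratch directory,
    run the tests, and report pass/fail. A minimal harness: a real evaluation
    would add sandboxing, per-test timeouts, and manual inspection."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "app.py"), "w") as f:
            f.write(source)
        with open(os.path.join(tmp, "test_app.py"), "w") as f:
            f.write(test_source)
        proc = subprocess.run(
            [sys.executable, "-m", "unittest", "test_app"],
            cwd=tmp, capture_output=True, timeout=60,
        )
        return proc.returncode == 0


# Toy check: a correct add() passes its test, a buggy one fails.
tests = (
    "import unittest\n"
    "from app import add\n"
    "class T(unittest.TestCase):\n"
    "    def test_add(self):\n"
    "        self.assertEqual(add(2, 3), 5)\n"
)
print(evaluate_generated_app("def add(a, b):\n    return a + b\n", tests))  # True
print(evaluate_generated_app("def add(a, b):\n    return a - b\n", tests))  # False
```

Pass/fail counts from such a harness, aggregated over a task suite, would be exactly the quantitative evidence the referee report below asks for.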

Original abstract

Software development is a complex task that necessitates cooperation among multiple members with diverse skills. Numerous studies used deep learning to improve specific phases in a waterfall model, such as design, coding, and testing. However, the deep learning model in each phase requires unique designs, leading to technical inconsistencies across various phases, which results in a fragmented and ineffective development process. In this paper, we introduce ChatDev, a chat-powered software development framework in which specialized agents driven by large language models (LLMs) are guided in what to communicate (via chat chain) and how to communicate (via communicative dehallucination). These agents actively contribute to the design, coding, and testing phases through unified language-based communication, with solutions derived from their multi-turn dialogues. We found their utilization of natural language is advantageous for system design, and communicating in programming language proves helpful in debugging. This paradigm demonstrates how linguistic communication facilitates multi-agent collaboration, establishing language as a unifying bridge for autonomous task-solving among LLM agents. The code and data are available at https://github.com/OpenBMB/ChatDev.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces ChatDev, a multi-agent framework for software development in which specialized LLM agents collaborate across design, coding, and testing phases. Agents are guided by a chat chain (specifying what to communicate) and communicative dehallucination (specifying how to communicate), deriving solutions from multi-turn natural-language dialogues. The authors claim that natural language is advantageous for system design while programming-language communication aids debugging, positioning language as a unifying bridge for autonomous task-solving among LLM agents. Code and data are released at https://github.com/OpenBMB/ChatDev.

Significance. If the central effectiveness claims were supported by controlled experiments, the work would be significant for demonstrating how communicative multi-agent LLM systems can unify fragmented software-engineering phases without phase-specific model designs. The public release of code is a clear strength that supports reproducibility and follow-on research.

major comments (2)
  1. [Abstract and evaluation sections] The manuscript asserts qualitative advantages of the chat-chain and communicative-dehallucination mechanisms (abstract and §4) yet reports no quantitative success rates, bug counts, completion times, or error metrics, nor any ablation or baseline comparisons against single-LLM prompting or non-communicative multi-agent setups. This absence directly undermines the central claim that the proposed communication protocols causally improve outcomes.
  2. [Method / Communicative Dehallucination] The description of how 'communicative dehallucination' is implemented and how it differs from standard prompting or self-consistency techniques is insufficient to allow replication or to isolate its contribution (the method section provides only high-level prose).
minor comments (1)
  1. [Introduction / Related Work] The paper introduces two new terms ('chat chain' and 'communicative dehallucination') without a dedicated related-work subsection contrasting them to prior multi-agent LLM frameworks (e.g., AutoGen, MetaGPT).

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback, which helps us strengthen the presentation of ChatDev. We address the major comments point by point below and commit to revisions that improve clarity and evidence.

Point-by-point responses
  1. Referee: [Abstract and evaluation sections] The manuscript asserts qualitative advantages of the chat-chain and communicative-dehallucination mechanisms (abstract and §4) yet reports no quantitative success rates, bug counts, completion times, or error metrics, nor any ablation or baseline comparisons against single-LLM prompting or non-communicative multi-agent setups. This absence directly undermines the central claim that the proposed communication protocols causally improve outcomes.

    Authors: We agree that the current manuscript relies primarily on qualitative demonstrations through case studies and illustrative examples rather than controlled quantitative experiments. While the abstract and evaluation sections highlight observed advantages of natural language for design and programming language for debugging, we acknowledge the absence of success rates, bug counts, completion times, ablations, or baseline comparisons. This limits the strength of causal claims. In the revised manuscript, we will add a dedicated quantitative evaluation section reporting task completion rates on a set of software development tasks, comparisons to single-LLM prompting and non-communicative multi-agent baselines, and ablation studies that isolate the chat chain and communicative dehallucination components. revision: yes

  2. Referee: [Method / Communicative Dehallucination] The description of how 'communicative dehallucination' is implemented and how it differs from standard prompting or self-consistency techniques is insufficient to allow replication or to isolate its contribution (the method section provides only high-level prose).

    Authors: We thank the referee for identifying this gap in methodological detail. Communicative dehallucination guides agents to perform verification during multi-turn dialogues by cross-referencing prior messages and validating outputs through code execution in later phases. It differs from standard prompting by enforcing an explicit inter-agent verification protocol and from self-consistency by relying on communicative correction rather than independent sampling. In the revised manuscript, we will expand the method section with pseudocode, a step-by-step algorithmic description, and explicit differentiation from related techniques to enable replication and better isolation of its contribution. revision: yes
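The verification loop the rebuttal describes could be sketched roughly as follows (the agent interface, the stub classes, and the probe budget are all illustrative assumptions, not the paper's implementation):

```python
class StubAssistant:
    """Toy agent: asks one clarifying question, then answers from grounded context."""

    def __init__(self):
        self.asked = False

    def clarify(self, ctx):
        # Return a probe for a detail the agent would otherwise invent; None when confident.
        if not self.asked:
            self.asked = True
            return "Which database should the app use?"
        return None

    def respond(self, ctx):
        n = ctx.count("\n") + 1
        return f"FINAL({n} grounded lines)"


class StubInstructor:
    def answer(self, probe):
        return "Use sqlite3."


def dehallucinated_reply(assistant, instructor, instruction, max_probes=3):
    """Before answering, the assistant probes the instructor for missing
    specifics, so the final response rests on stated context rather than
    fabricated detail."""
    context = [instruction]
    for _ in range(max_probes):
        probe = assistant.clarify("\n".join(context))
        if probe is None:
            break
        context.append(instructor.answer(probe))
    return assistant.respond("\n".join(context))


final = dehallucinated_reply(StubAssistant(), StubInstructor(), "Write the storage layer")
print(final)  # FINAL(2 grounded lines)
```

Under this reading, the contrast with self-consistency is that correction happens through an exchange with the other agent (extra grounded context) rather than by sampling and voting over independent answers.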

Circularity Check

0 steps flagged

No circularity: system description with empirical observations, no derivations or fitted predictions.

Full rationale

The paper describes an implemented multi-agent framework (ChatDev) for software development using LLMs, with claims resting on observed behavior from running the system on example tasks. No mathematical derivations, equations, uniqueness theorems, or parameter-fitting steps are present. The central claims concern advantages of natural language communication and programming-language debugging, supported by qualitative examples and implementation details rather than any chain that reduces to self-referential inputs or self-citations. The circularity score of 1.0 reflects this: the work is self-contained as an engineering demonstration with no load-bearing circular elements.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 2 invented entities

The central claim depends on two newly introduced mechanisms whose effectiveness is asserted without independent prior evidence, plus the domain assumption that current LLMs can sustain coherent multi-turn role-based collaboration.

axioms (1)
  • domain assumption Current LLMs can be prompted into stable specialized roles and can follow structured communication protocols across design, coding, and testing phases without additional fine-tuning.
    Invoked when the paper states that agents actively contribute through unified language-based communication.
invented entities (2)
  • chat chain no independent evidence
    purpose: to specify what each agent should communicate at each development stage
    New structuring device introduced to organize dialogues; no independent evidence supplied.
  • communicative dehallucination no independent evidence
    purpose: to steer agents toward reliable communication and reduce fabricated content
    New technique introduced to improve multi-agent reliability; no independent evidence supplied.

pith-pipeline@v0.9.0 · 5531 in / 1402 out tokens · 43976 ms · 2026-05-12T20:28:32.041494+00:00 · methodology

discussion (0)


Forward citations

Cited by 29 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Why Do Multi-Agent LLM Systems Fail?

    cs.AI 2025-03 unverdicted novelty 8.0

    The authors create the first large-scale dataset and taxonomy of failure modes in multi-agent LLM systems to explain their limited performance gains.

  2. The Moltbook Files: A Harmless Slopocalypse or Humanity's Last Experiment

    cs.CL 2026-05 unverdicted novelty 7.0

    An AI-agent social platform generated mostly neutral content whose use in fine-tuning reduced model truthfulness comparably to human Reddit data, suggesting limited unique harm but flagging tail risks like secret leaks.

  3. Social Bias in LLM-Generated Code: Benchmark and Mitigation

    cs.SE 2026-05 unverdicted novelty 7.0

    LLMs show up to 60.58% social bias in generated code; a new Fairness Monitor Agent cuts bias by 65.1% and raises functional correctness from 75.80% to 83.97%.

  4. FineState-Bench: Benchmarking State-Conditioned Grounding for Fine-grained GUI State Setting

    cs.CV 2026-04 unverdicted novelty 7.0

    FineState-Bench and FineState-Metrics show LVLMs achieve only 22.8% average exact-state success in GUI interactions, with visual diagnostic hints improving results by up to 14.9 points.

  5. Theory Under Construction: Orchestrating Language Models for Research Software Where the Specification Evolves

    cs.SE 2026-04 unverdicted novelty 7.0

    Comet-H orchestrates LLMs via deficit-scoring prompt selection and half-life task tracking to co-evolve research software components, demonstrated by a static analysis tool reaching F1=0.768 versus a 0.364 baseline.

  6. PhysCodeBench: Benchmarking Physics-Aware Symbolic Simulation of 3D Scenes via Self-Corrective Multi-Agent Refinement

    cs.RO 2026-04 unverdicted novelty 7.0

    PhysCodeBench benchmark and SMRF multi-agent framework enable better AI generation of physically accurate 3D simulation code, boosting performance by 31 points over baselines.

  7. A Systematic Survey of Security Threats and Defenses in LLM-Based AI Agents: A Layered Attack Surface Framework

    cs.CR 2026-04 unverdicted novelty 7.0

    A new 7x4 taxonomy organizes agentic AI security threats by architectural layer and persistence timescale, revealing under-explored upper layers and missing defenses after surveying 116 papers.

  8. ClawNet: Human-Symbiotic Agent Network for Cross-User Autonomous Cooperation

    cs.AI 2026-04 unverdicted novelty 7.0

    ClawNet digitizes human collaborative relationships into a network of identity-governed AI agents that collaborate on behalf of their owners through a central orchestrator enforcing binding and verification.

  9. Detecting Multi-Agent Collusion Through Multi-Agent Interpretability

    cs.AI 2026-04 conditional novelty 7.0

    NARCBench and five activation-probing methods detect multi-agent collusion with 0.73-1.00 AUROC across distribution shifts and steganographic tasks by aggregating per-agent signals.

  10. CTM-AI: A Blueprint for General AI Inspired by a Model of Consciousness

    q-bio.NC 2026-04 unverdicted novelty 6.0

    CTM-AI combines a formal consciousness model with foundation models to report state-of-the-art results on sarcasm detection, humor, and agentic tool-use benchmarks.

  11. CreativeGame: Toward Mechanic-Aware Creative Game Generation

    cs.AI 2026-04 unverdicted novelty 6.0

    CreativeGame enables iterative HTML5 game generation via mechanic-guided planning, lineage memory, runtime validation, and programmatic rewards to produce inspectable version-to-version mechanic evolution.

  12. CoopGuard: Stateful Cooperative Agents Safeguarding LLMs Against Evolving Multi-Round Attacks

    cs.CR 2026-04 unverdicted novelty 6.0

    CoopGuard deploys cooperative agents to track conversation history and counter evolving multi-round attacks on LLMs, achieving a 78.9% reduction in attack success rate on a new 5,200-sample benchmark.

  13. ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

    cs.CL 2023-08 conditional novelty 6.0

    Multi-agent debate among LLMs yields more reliable text evaluations than single-agent prompting by simulating collaborative human judgment.

  14. Reinforced Collaboration in Multi-Agent Flow Networks

    cs.LG 2026-05 unverdicted novelty 5.0

    MANGO optimizes multi-agent LLM workflows via flow networks, RL, and textual gradients, delivering up to 12.8% higher performance and 47.4% better efficiency while generalizing to new domains.

  15. ProfiliTable: Profiling-Driven Tabular Data Processing via Agentic Workflows

    cs.AI 2026-05 unverdicted novelty 5.0

    ProfiliTable is a profiling-driven multi-agent system that builds semantic context through exploration and closed-loop refinement to produce more reliable tabular data transformations than prior LLM approaches.

  16. Consistency as a Testable Property: Statistical Methods to Evaluate AI Agent Reliability

    cs.AI 2026-05 unverdicted novelty 5.0

    A framework with U-statistics and kernel-based metrics quantifies AI agent consistency and robustness, showing trajectory metrics outperform pass@1 rates in diagnosing failures.

  17. Swarm Skills: A Portable, Self-Evolving Multi-Agent System Specification for Coordination Engineering

    cs.CL 2026-05 unverdicted novelty 5.0

    Swarm Skills is a distributable specification for multi-agent workflows that includes roles, execution bounds, and a self-evolution algorithm to automatically improve coordination strategies.

  18. Agentic AI in the Software Development Lifecycle: Architecture, Empirical Evidence, and the Reshaping of Software Engineering

    cs.SE 2026-04 unverdicted novelty 5.0

    Agentic AI systems are shifting software engineering from line-level code generation to delegated repository-scale execution under supervision, with SWE-bench performance rising from 1.96% to 78.4% and productivity ga...

  19. ARMove: Learning to Predict Human Mobility through Agentic Reasoning

    cs.MA 2026-04 unverdicted novelty 5.0

    ARMove is a transferable framework for human mobility prediction that combines agentic LLM reasoning, feature management, and large-small model synergy to outperform baselines on several metrics while improving interp...

  20. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate

    cs.CL 2023-05 conditional novelty 5.0

    Multi-agent debate with tit-for-tat arguments and a judge LLM improves reasoning by preventing LLMs from locking into incorrect initial solutions.

  21. AssemPlanner: A Multi-Agent Based Task Planning Framework for Flexible Assembly System

    cs.RO 2026-05 unverdicted novelty 4.0

    AssemPlanner is a ReAct-based multi-agent system that autonomously generates production plans from natural language inputs by integrating scheduling, knowledge, line balancing, and scene graph feedback.

  22. Terminus-4B: Can a Smaller Model Replace Frontier LLMs at Agentic Execution Tasks?

    cs.AI 2026-05 unverdicted novelty 4.0

    A fine-tuned 4B model matches or exceeds frontier LLMs in terminal execution subagent tasks for coding agents, reducing main agent token usage by 30% with no performance loss.

  23. Multi-Agent Systems: From Classical Paradigms to Large Foundation Model-Enabled Futures

    cs.AI 2026-04 unverdicted novelty 4.0

    A survey comparing classical multi-agent systems with large foundation model-enabled multi-agent systems, showing how the latter enables semantic-level collaboration and greater adaptability.

  24. An Empirical Study on Influence-Based Pretraining Data Selection for Code Large Language Models

    cs.SE 2026-04 unverdicted novelty 4.0

    Data-influence-score filtering using validation-set loss on downstream coding tasks improves Code-LLM performance, with the most beneficial training data varying significantly across different programming tasks.

  25. OpenKedge: Governing Agentic Mutation with Execution-Bound Safety and Evidence Chains

    cs.AI 2026-04 unverdicted novelty 4.0

    OpenKedge redefines AI agent state mutations as a governed process using intent proposals, policy-evaluated execution contracts, and cryptographic evidence chains to enable safe, auditable agentic behavior.

  26. Towards Trustworthy Report Generation: A Deep Research Agent with Progressive Confidence Estimation and Calibration

    cs.AI 2026-04 unverdicted novelty 4.0

    A deep research agent incorporates progressive confidence estimation and calibration to produce trustworthy reports with transparent confidence scores on claims.

  27. The Rise and Potential of Large Language Model Based Agents: A Survey

    cs.AI 2023-09 accept novelty 4.0

    The paper surveys the origins, frameworks, applications, and open challenges of AI agents built on large language models.

  28. Code Broker: A Multi-Agent System for Automated Code Quality Assessment

    cs.SE 2026-04 unverdicted novelty 3.0

    Code Broker deploys a five-agent hierarchy that combines LLM semantic analysis with static linting to generate actionable Python code quality reports.

  29. Fairness in Multi-Agent Systems for Software Engineering: An SDLC-Oriented Rapid Review

    cs.SE 2026-04 unverdicted novelty 2.0

    A rapid review of fairness in LLM-enabled multi-agent systems for the software development lifecycle concludes that the field lacks standardized evaluations, broad coverage, and effective governance, leaving it unprep...