pith. machine review for the scientific record.

arxiv: 2604.07341 · v1 · submitted 2026-04-08 · 💻 cs.SE · cs.LG

ReCodeAgent: A Multi-Agent Workflow for Language-agnostic Translation and Validation of Large-scale Repositories

Pith reviewed 2026-05-10 17:23 UTC · model grok-4.3

classification 💻 cs.SE cs.LG
keywords repository-level code translation · multi-agent systems · language-agnostic translation · software migration · autonomous agents · code validation · test pass rate

The pith

ReCodeAgent is the first multi-agent system to deliver high-success-rate, language-agnostic translation and validation for large code repositories.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

ReCodeAgent is an autonomous multi-agent system for translating and validating entire code repositories from one programming language to another. It requires users to supply only the source project and the target language, eliminating the need for language-pair-specific engineering or ongoing human oversight. The approach was tested on 118 real-world projects spanning six languages and four translation pairs, where it outperformed four alternative neuro-symbolic and agentic methods. ReCodeAgent improved test pass rates by 60.8 percent on ground-truth tests at an average cost of 15.3 dollars per project. Switching to a single-agent design caused test pass rates to fall by 40.4 percent and made trajectories 28 percent longer.

Core claim

ReCodeAgent is an autonomous multi-agent approach for language-agnostic repository-level code translation and validation. Users only need to provide the project in the source PL and specify the target PL for ReCodeAgent to automatically translate and validate the entire repository. ReCodeAgent is the first technique to achieve high translation success rates across many PLs.

What carries the argument

Multi-agent workflow that synthesizes code across programming languages and autonomously invokes each language's existing analysis and validation tools.
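
The workflow sentence above compresses a loop worth spelling out: translator and validator roles are kept separate so that failures feed back into the translation rather than into the tests. A minimal sketch; the agent split, the `VALIDATORS` table, and the `llm_translate` stub are illustrative placeholders, not ReCodeAgent's actual interfaces.

```python
# Illustrative sketch of a multi-agent translate-then-validate loop.
# The role split, tool table, and llm_translate stub are assumptions
# for exposition; ReCodeAgent's real architecture differs in detail.

# Per-language validation commands the workflow could invoke autonomously.
VALIDATORS = {
    "rust": ["cargo", "test"],
    "python": ["pytest"],
    "go": ["go", "test", "./..."],
}

def llm_translate(source: str, target_lang: str, feedback: str = "") -> str:
    """Stand-in for the translator agent's LLM call."""
    return f"// {target_lang} translation of {len(source)} chars {feedback}"

def run_validator(target_lang: str, translated: str) -> tuple[bool, str]:
    """Stand-in for the validator agent: would run VALIDATORS[target_lang]
    (e.g. via subprocess) inside the translated project and parse results."""
    cmd = " ".join(VALIDATORS[target_lang])
    return True, f"all tests passed under `{cmd}`"

def translate_repository(files: dict[str, str], target_lang: str,
                         max_rounds: int = 3) -> dict[str, str]:
    """Translator and validator are separate roles: the validator never
    edits code, so it cannot 'fix' a failure by weakening the tests."""
    translated = {path: llm_translate(src, target_lang)
                  for path, src in files.items()}
    for _ in range(max_rounds):
        ok, report = run_validator(target_lang, "\n".join(translated.values()))
        if ok:
            break
        # Failures flow back to the translator, not into the tests.
        translated = {p: llm_translate(files[p], target_lang, report)
                      for p in files}
    return translated
```

The point of the split is the conflict of objectives the paper's Figure 1 excerpt notes: an agent responsible for both generation and validation can make tests pass by modifying the tests.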

If this is right

  • Translation and validation succeed across six languages and four pairs without per-pair custom engineering.
  • Test pass rates on ground-truth tests improve by 60.8 percent over four prior techniques.
  • Average cost stays at 15.3 dollars per project of roughly 2,000 lines.
  • Multi-agent design raises test pass rate by 40.4 percent and shortens trajectories by 28 percent relative to single-agent versions.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Large legacy codebases could be migrated to newer languages with far less manual porting effort.
  • The same autonomous workflow pattern may apply to other repository-scale tasks such as refactoring or security hardening.
  • Success on the tested language set suggests the method could scale to additional languages if the agentic tool-use remains reliable.

Load-bearing premise

That providing only the source project and target PL is sufficient for fully autonomous, high-success-rate translation and validation of large-scale repositories without language-pair-specific engineering or human oversight.

What would settle it

Running ReCodeAgent on repositories written in a programming language outside the six evaluated ones and observing whether high test pass rates are maintained without adding custom tools or human intervention.
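
The settling experiment reduces to a pass-rate comparison over an unseen language. A minimal sketch of the decision rule; the result shape and the 0.9 threshold are hypothetical choices for illustration, not values taken from the paper.

```python
# Hypothetical harness for the settling experiment: run the system on
# repositories in a language outside the evaluated six, collect per-test
# pass/fail outcomes, and check whether pass rates stay high. Only the
# pass-rate arithmetic is fixed; everything else is a placeholder.

def pass_rate(results: list[bool]) -> float:
    """Fraction of ground-truth tests that pass in the translated project."""
    return sum(results) / len(results) if results else 0.0

def would_settle_it(per_project_results: dict[str, list[bool]],
                    threshold: float = 0.9) -> bool:
    """The claim survives if pass rates on the unseen language stay high
    without any custom tooling or human intervention having been added."""
    rates = [pass_rate(r) for r in per_project_results.values()]
    return all(rate >= threshold for rate in rates)
```

For example, `would_settle_it({"repoA": [True]*9 + [False], "repoB": [True]*20})` returns `True` because both projects clear the (assumed) 0.9 bar.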

Figures

Figures reproduced from arXiv: 2604.07341 by Ali Reza Ibrahimzada, Brandon Paulsen, Daniel Kroening, Reyhaneh Jabbarvand.

Figure 1
Figure 1: Overview of ReCodeAgent. Language interoperability may not exist for arbitrary PL pairs, so a PL-agnostic approach may operate on test translation (of existing tests) and additional test generation. Code generation and validation, in general, are two conflicting objectives that should not be performed by one agent [16, 25, 29, 38, 43, 57]; otherwise, the agent may modify the test rather than the incorrect code… view at source ↗
Figure 2
Figure 2: Hover feature on Python code in an IDE. view at source ↗
Figure 4
Figure 4: Project analysis (PA) tools output. The tool is helpful in making consistent changes across the codebase, without the need to perform additional edits. 3.1.2 Project Analysis (PA) Tools: These tools extract structural information from the codebase, aiding agents in project comprehension and planning. The goal of these tools is to reduce the token consumption of the agent, which would otherwise be spent on ex… view at source ↗
Figure 5
Figure 5: Documents generated by Analyzer Agent in ReCodeAgent. ======== Fragment Extraction ======= ## checkdigit.go checkdigit.go:isNumber checkdigit.go:NewLuhn checkdigit.go:NewDamm checkdigit.go:NewUPC ## damm.go =========== Name Mapping =========== ## functions: go.isNumber: rs.isNumber go.NewLuhn: rs.NewLuhn go.NewDamm: rs.NewDamm go.NewUPC: rs.NewUPC ## variables: go. ... ======== Skeleton Generation ======= … view at source ↗
Figure 7
Figure 7: Impact of different agents on translation effectiveness and agent trajectories. view at source ↗
Figure 8
Figure 8: Cost and Tool Usage Analysis of ReCodeAgent. 4.5.1 Cost. Project costs and token usage scale with complexity, ranging from AlphaTrans (2.5M input/1.4M output tokens at $76) and Oxidizer (1.1M input/0.4M output at $25) to the more economical Skel (0.6M input/0.3M output at $20) and Crust (0.3M input/0.2M output at $11). Execution time scales linearly with project size: AlphaTrans averages 258 minutes per p… view at source ↗
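
Figure 5's Analyzer Agent documents (fragment extraction, name mapping) can be illustrated with a toy version. The regex-based extraction below is a stand-in for the tree-sitter parsing a real pipeline would use, and the `go.`/`rs.` prefixes simply mirror the figure; none of this is ReCodeAgent's actual code.

```python
import re

# Toy version of the Figure 5 documents: extract function fragments from a
# Go source file, then pair each function with a target (Rust) name. A real
# pipeline would parse with tree-sitter; this regex is an illustrative stand-in.
GO_FUNC = re.compile(r"^func\s+(\w+)\s*\(", re.MULTILINE)

def extract_fragments(filename: str, source: str) -> list[str]:
    """Fragment extraction: one '<file>:<function>' entry per definition."""
    return [f"{filename}:{name}" for name in GO_FUNC.findall(source)]

def name_mapping(functions: list[str]) -> dict[str, str]:
    """Name mapping: 'go.<name> -> rs.<name>', identity-style as in Figure 5."""
    return {f"go.{fn}": f"rs.{fn}" for fn in functions}

source = """\
func isNumber(s string) bool { return true }
func NewLuhn() int { return 0 }
"""
fragments = extract_fragments("checkdigit.go", source)
mapping = name_mapping([f.split(":")[1] for f in fragments])
```

Running this yields fragments like `checkdigit.go:isNumber` and mappings like `go.NewLuhn: rs.NewLuhn`, matching the shape of the generated documents in the figure.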
read the original abstract

Most repository-level code translation and validation techniques have been evaluated on a single source-target programming language (PL) pair, owing to the complex engineering effort required to adapt new PL pairs. Programming agents can enable PL-agnosticism in repository-level code translation and validation: they can synthesize code across many PLs and autonomously use existing tools specific to each PL's analysis. However, state-of-the-art has yet to offer a fully autonomous agentic approach for repository-level code translation and validation of large-scale programs. This paper proposes ReCodeAgent, an autonomous multi-agent approach for language-agnostic repository-level code translation and validation. Users only need to provide the project in the source PL and specify the target PL for ReCodeAgent to automatically translate and validate the entire repository. ReCodeAgent is the first technique to achieve high translation success rates across many PLs. We compare the effectiveness of ReCodeAgent with four alternative neuro-symbolic and agentic approaches to translate 118 real-world projects, with 1,975 LoC and 43 translation units for each project, on average. The projects cover 6 PLs (C, Go, Java, JavaScript, Python, and Rust) and 4 PL pairs (C-Rust, Go-Rust, Java-Python, Python-JavaScript). Our results demonstrate that ReCodeAgent consistently outperforms prior techniques on translation correctness, improving test pass rate by 60.8% on ground-truth tests, with an average cost of $15.3. We also perform process-centric analysis of ReCodeAgent trajectories to confirm its procedural efficiency. Finally, we investigate how the design choices (a multi-agent vs. single-agent architecture) influence ReCodeAgent performance: on average, the test pass rate drops by 40.4%, and trajectories become 28% longer and persistently inefficient.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes ReCodeAgent, a multi-agent workflow designed to enable language-agnostic translation and validation of large-scale code repositories. The approach requires only the source project and the target programming language as input, aiming to autonomously handle translation across multiple programming languages without pair-specific engineering. The evaluation involves translating 118 real-world projects (average 1,975 LoC, 43 translation units) across 6 PLs and 4 pairs, comparing against four neuro-symbolic and agentic baselines. Key results include a 60.8% improvement in test pass rate on ground-truth tests and an average cost of $15.3, along with analysis showing multi-agent design improves performance by 40.4% over single-agent.

Significance. If the results hold under rigorous scrutiny, this work represents a meaningful advance in repository-level code translation by demonstrating a scalable, agent-based method that reduces reliance on language-pair-specific adaptations. The inclusion of process-centric trajectory analysis and cost metrics strengthens the practical implications for software maintenance and migration tasks. The multi-agent vs. single-agent comparison provides useful insights into agentic system design.

major comments (2)
  1. [Evaluation] The central claim that ReCodeAgent is language-agnostic and fully autonomous relies on experiments limited to four language pairs (C-Rust, Go-Rust, Java-Python, Python-JavaScript). No results are provided for additional pairs or for demonstrating that the workflow succeeds on unseen pairs without any pair-specific configuration or human intervention in environment setup (e.g., compilers, dependency resolution). This is load-bearing for the 'across many PLs' assertion in the abstract.
  2. [Abstract and Evaluation] The reported 60.8% improvement in test pass rate lacks accompanying details on experimental protocol, such as how ground-truth tests were obtained, project selection criteria, number of runs, or statistical tests for significance. Without these, it is difficult to assess the reliability of the performance claims.
minor comments (2)
  1. [Abstract] The abstract mentions 'high translation success rates across many PLs' but the evaluation is on four pairs; consider qualifying this in the abstract for accuracy.
  2. [Throughout] Ensure consistent use of terminology for 'translation units' and clarify how they are defined in the methodology section.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their insightful review and the recommendation for major revision. We have carefully considered the comments and provide point-by-point responses below, along with our plans for revisions to address the concerns.

read point-by-point responses
  1. Referee: [Evaluation] The central claim that ReCodeAgent is language-agnostic and fully autonomous relies on experiments limited to four language pairs (C-Rust, Go-Rust, Java-Python, Python-JavaScript). No results are provided for additional pairs or for demonstrating that the workflow succeeds on unseen pairs without any pair-specific configuration or human intervention in environment setup (e.g., compilers, dependency resolution). This is load-bearing for the 'across many PLs' assertion in the abstract.

    Authors: We agree that expanding the evaluation to more language pairs would provide stronger evidence for the language-agnostic claim. However, the current experiments already demonstrate the approach across four diverse pairs involving six languages, with no pair-specific configurations or manual interventions in the workflow—the agents autonomously manage tool usage and environment setup for each target language. The design is intentionally general, as described in Section 3. To further address this, we will revise the abstract to more precisely state the scope of our evaluation (multiple pairs across six PLs) and add a discussion on the generalizability of the multi-agent workflow to unseen pairs based on its architecture. We cannot add new experimental results for additional pairs at this stage without significant additional resources, but the existing results support the autonomy claim. revision: partial

  2. Referee: [Abstract and Evaluation] The reported 60.8% improvement in test pass rate lacks accompanying details on experimental protocol, such as how ground-truth tests were obtained, project selection criteria, number of runs, or statistical tests for significance. Without these, it is difficult to assess the reliability of the performance claims.

    Authors: We acknowledge the need for greater transparency in the experimental protocol. In the revised version of the manuscript, we will expand the Evaluation section (Section 4) to include: detailed information on how ground-truth tests were sourced from the original project repositories; the criteria used for selecting the 118 real-world projects (e.g., popularity, presence of comprehensive test suites, and diversity across languages); the number of experimental runs performed (we conducted multiple runs to mitigate stochasticity in agent behavior); and results of statistical significance tests (such as paired t-tests or Wilcoxon tests) to support the 60.8% improvement claim. These additions will allow readers to better evaluate the reliability of our findings. revision: yes

Circularity Check

0 steps flagged

No circularity; empirical results rest on external benchmarks and direct comparisons.

full rationale

The paper is an empirical evaluation of a multi-agent system for code translation. It reports measured improvements (e.g., 60.8% higher test pass rate) from running ReCodeAgent and four baselines on 118 real-world projects spanning four language pairs. No equations, derivations, fitted parameters, or first-principles predictions appear in the provided text. Claims of language-agnostic behavior and outperformance are grounded in the experimental outcomes rather than any self-definitional loop, renamed known result, or load-bearing self-citation chain. The evaluation design is self-contained against the stated benchmarks and does not reduce any central result to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

Based solely on the abstract; no explicit free parameters, axioms, or invented entities are detailed beyond the high-level claim that agents can synthesize code and autonomously invoke PL-specific tools.

axioms (1)
  • domain assumption Repository-level code translation and validation can be performed autonomously by agents that synthesize code and use existing PL-specific analysis tools without language-pair-specific engineering.
    This premise underpins the entire language-agnostic claim in the abstract.
invented entities (1)
  • ReCodeAgent multi-agent workflow no independent evidence
    purpose: To enable fully autonomous, language-agnostic repository translation and validation
    The system itself is the primary contribution introduced in the abstract.

pith-pipeline@v0.9.0 · 5658 in / 1306 out tokens · 61057 ms · 2026-05-10T17:23:59.340519+00:00 · methodology


Reference graph

Works this paper leans on

102 extracted references · 37 canonical work pages · 4 internal anchors

  1. [1]

Muhammad Salman Abid, Mrigank Pawagi, Sugam Adhikari, Xuyan Cheng, Ryed Badr, Md Wahiduzzaman, Vedant Rathi, Ronghui Qi, Choiyin Li, Lu Liu, et al. 2024. GlueTest: Testing Code Translation via Language Interoperability. In 2024 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 612–617

  2. [2]

Lakshya A Agrawal, Aditya Kanade, Navin Goyal, Shuvendu Lahiri, and Sriram Rajamani. 2023. Monitor-guided decoding of code LMs with static analysis of repository context. In Advances in Neural Information Processing Systems, Vol. 36. 32270–32298. https://neurips.cc/media/neurips-2023/Slides/70362.pdf

  3. [3]

The Algorithms. 2026. All Algorithms implemented in Python. https://github.com/TheAlgorithms/Python/blob/master/data_structures/binary_tree/binary_search_tree_recursive.py

  4. [4]

The Algorithms. 2026. All Algorithms implemented in Python. https://github.com/TheAlgorithms/Python/blob/master/data_structures/binary_tree/red_black_tree.py

  5. [5]

David Belicza. 2026. TextRank on Go. https://github.com/DavidBelicza/TextRank

  6. [6]

The SWE-bench Team. 2026. SWE-bench Leaderboard. https://www.swebench.com/

  7. [7]

Hugo Bollon. 2026. Go-edlib: Edit distance and string comparison library. https://github.com/hbollon/go-edlib

  8. [8]

Islem Bouzenia, Premkumar Devanbu, and Michael Pradel. 2024. RepairAgent: An autonomous, LLM-based agent for program repair. arXiv preprint arXiv:2403.17134 (2024)

  9. [9]

Xuemeng Cai, Jiakun Liu, Xiping Huang, Yijun Yu, Haitao Wu, Chunmiao Li, Bo Wang, Imam Nur Bani Yusuf, and Lingxiao Jiang. 2025. RustMap: Towards project-scale C-to-Rust migration via program analysis and LLM. In International Conference on Engineering of Complex Computer Systems. Springer, 283–302

  10. [10]

Kevin Chen, Marco Cusumano-Towner, Brody Huval, Aleksei Petrenko, Jackson Hamburger, Vladlen Koltun, and Philipp Krähenbühl. 2025. Reinforcement learning for long-horizon interactive LLM agents. arXiv preprint arXiv:2502.01600 (2025)

  11. [11]

Xinyun Chen, Chang Liu, and Dawn Song. 2018. Tree-to-tree neural networks for program translation. Advances in Neural Information Processing Systems 31 (2018)

  12. [12]

Neil Chowdhury, James Aung, Chan Jun Shern, Oliver Jaffe, Dane Sherburn, Giulio Starace, Evan Mays, Rachel Dias, Marwan Aljubeh, Mia Glaese, Carlos E. Jimenez, John Yang, Leyton Ho, Tejal Patwardhan, Kevin Liu, and Aleksander Madry. 2024. Introducing SWE-bench Verified. https://openai.com/index/introducing-swe-bench-verified/

  13. [13]

Vivid Cortex. 2026. gohistogram - Histograms in Go. https://github.com/VividCortex/gohistogram

  14. [14]

Saman Dehghan, Tianran Sun, Tianxiang Wu, Zihan Li, and Reyhaneh Jabbarvand. 2025. Translating Large-Scale C Repositories to Idiomatic Rust. arXiv preprint arXiv:2511.20617 (2025)

  15. [15]

Peng Di, Jianguo Li, Hang Yu, Wei Jiang, Wenting Cai, Yang Cao, Chaoyu Chen, Dajun Chen, Hongwei Chen, Liang Chen, et al. 2024. CodeFuse-13B: A pretrained multi-lingual code large language model. In Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice. 418–429

  16. [16]

Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2024. Self-collaboration code generation via ChatGPT. ACM Transactions on Software Engineering and Methodology 33, 7 (2024), 1–38

  17. [17]

Lutfi Eren Erdogan, Nicholas Lee, Sehoon Kim, Suhong Moon, Hiroki Furuta, Gopala Anumanchipalli, Kurt Keutzer, and Amir Gholami. 2025. Plan-and-Act: Improving planning of agents for long-horizon tasks. arXiv preprint arXiv:2503.09572 (2025)

  18. [18]

Montana Flynn. 2026. Stats - Golang Statistics Package. https://github.com/montanaflynn/stats

  19. [19]

The Apache Software Foundation. 2026. Apache Commons CLI. https://github.com/apache/commons-cli

  20. [20]

The Apache Software Foundation. 2026. Apache Commons CSV. https://github.com/apache/commons-csv

  21. [21]

The Apache Software Foundation. 2026. Apache Commons FileUpload. https://github.com/apache/commons-fileupload

  22. [22]

The Apache Software Foundation. 2026. Apache Commons Validator. https://github.com/apache/commons-validator

  23. [23]

    GitHub. 2026. CodeQL. https://codeql.github.com

  24. [24]

Ziqi Guan, Xin Yin, Zhiyuan Peng, and Chao Ni. 2025. RepoTransAgent: Multi-agent LLM framework for repository-aware code translation. arXiv preprint arXiv:2508.17720 (2025)

  25. [25]

Dong Huang, Jie M Zhang, Michael Luck, Qingwen Bu, Yuhao Qing, and Heming Cui. 2023. AgentCoder: Multi-agent-based code generation with iterative testing and optimisation. arXiv preprint arXiv:2312.13010 (2023)

  26. [26]

Ali Reza Ibrahimzada, Kaiyao Ke, Mrigank Pawagi, Muhammad Salman Abid, Rangeet Pan, Saurabh Sinha, and Reyhaneh Jabbarvand. 2025. AlphaTrans: A Neuro-Symbolic Compositional Approach for Repository-Level Code Translation and Validation. Proc. ACM Softw. Eng. 2, FSE, Article FSE109 (June 2025), 23 pages. doi:10.1145/3729379

  27. [27]

    Immunant. 2024. C2Rust Transpiler. https://github.com/immunant/c2rust

  28. [28]

Paul Irwin. 2026. Java to CSharp Converter. https://github.com/paulirwin/JavaToCSharp

  29. [29]

Md. Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. 2024. MapCoder: Multi-Agent Code Generation for Competitive Problem Solving. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Ban...

  30. [30]

Suman Jain and Inderveer Chana. 2015. Modernization of legacy systems: A generalised roadmap. In Proceedings of the Sixth International Conference on Computer and Communication Technology 2015. 62–67

  31. [31]

Pooyan Jamshidi, Aakash Ahmad, and Claus Pahl. 2013. Cloud migration research: a systematic review. IEEE Transactions on Cloud Computing 1, 2 (2013), 142–157

  32. [32]

Mingsheng Jiao, Tingrui Yu, Xuan Li, Guanjie Qiu, Xiaodong Gu, and Beijun Shen. 2023. On the evaluation of neural code translation: Taxonomy and benchmark. In 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 1529–1541

  33. [33]

Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. 2024. SWE-bench: Can Language Models Resolve Real-world Github Issues?. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=VTF8yNQM66

  34. [34]

Kaiyao Ke, Ali Reza Ibrahimzada, Rangeet Pan, Saurabh Sinha, and Reyhaneh Jabbarvand. 2025. Advancing Automated In-Isolation Validation in Repository-Level Code Translation. arXiv preprint arXiv:2511.21878 (2025)

  35. [35]

Ravi Khadka, Belfrit V Batlajery, Amir M Saeidi, Slinger Jansen, and Jurriaan Hage. 2014. How do professionals perceive legacy systems and software modernization?. In Proceedings of the 36th International Conference on Software Engineering. 36–47

  36. [36]

Anirudh Khatry, Robert Zhang, Jia Pan, Ziteng Wang, Qiaochu Chen, Greg Durrett, and Isil Dillig. 2025. CRUST-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation. arXiv preprint arXiv:2504.15254 (2025)

  37. [37]

Tianyu Li, Ruishi Li, Bo Wang, Brandon Paulsen, Umang Mathur, and Prateek Saxena. 2025. Adversarial Agent Collaboration for C to Rust Translation. arXiv preprint arXiv:2510.03879 (2025)

  38. [38]

Zi Lin, Sheng Shen, Jingbo Shang, Jason Weston, and Yixin Nie. 2025. Learning to solve and verify: A self-play framework for code and test generation. arXiv preprint arXiv:2502.14948 (2025)

  39. [39]

Junwei Liu, Kaixin Wang, Yixuan Chen, Xin Peng, Zhenpeng Chen, Lingming Zhang, and Yiling Lou. 2024. Large language model-based agents for software engineering: A survey. arXiv preprint arXiv:2409.02977 (2024)

  40. [40]

Shuyang Liu, Yang Chen, Rahul Krishna, Saurabh Sinha, Jatin Ganhotra, and Reyhan Jabbarvand. 2025. Process-Centric Analysis of Agentic Software Systems. arXiv preprint arXiv:2512.02393 (2025)

  41. [41]

Feng Luo, Kexing Ji, Cuiyun Gao, Shuzheng Gao, Jia Feng, Kui Liu, Xin Xia, and Michael R Lyu. 2025. Integrating Rules and Semantics for LLM-Based C-to-Rust Translation. arXiv preprint arXiv:2508.06926 (2025)

  42. [42]

ZhouYang Luo. 2026. A library implementing different string similarity and distance measures using Python. https://github.com/luozhouyang/python-string-similarity/tree/master/strsimpy

  43. [43]

Nat McAleese, Rai Michael Pokorny, Juan Felipe Ceron Uribe, Evgenia Nitishinskaya, Maja Trebacz, and Jan Leike. 2024. LLM Critics Help Catch LLM Bugs. arXiv preprint arXiv:2407.00215 (2024)

  44. [44]

Microsoft. 2026. Language Server Implementations. https://microsoft.github.io/language-server-protocol/implementors/servers/

  45. [45]

Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N Nguyen. 2013. Lexical statistical machine translation for language migration. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. 651–654

  46. [46]

Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N Nguyen. 2014. Migrating code with statistical machine translation. In Companion Proceedings of the 36th International Conference on Software Engineering. 544–547

  47. [47]

Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N Nguyen. 2015. Divide-and-conquer approach for multi-phase statistical migration for source code (t). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 585–596

  48. [48]

Wasif Nisar. 2022. Modernization framework to enhance the security of legacy information systems. Intelligent Automation & Soft Computing (2022)

  49. [49]

Vikram Nitin, Rahul Krishna, and Baishakhi Ray. 2024. Spectra: Enhancing the code translation ability of language models by generating multi-modal specifications. arXiv preprint arXiv:2405.18574 (2024)

  50. [50]

Vikram Nitin, Rahul Krishna, Luiz Lemos do Valle, and Baishakhi Ray. 2025. C2SaferRust: Transforming C projects into safer Rust with neurosymbolic techniques. arXiv preprint arXiv:2501.14257 (2025)

  51. [51]

Oracle. 2026. GraalVM. https://www.graalvm.org

  52. [52]

Siru Ouyang, Wenhao Yu, Kaixin Ma, Zilin Xiao, Zhihan Zhang, Mengzhao Jia, Jiawei Han, Hongming Zhang, and Dong Yu. 2025. RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph. In The Thirteenth International Conference on Learning Representations. https://openreview.net/forum?id=dw9VUsSHGB

  53. [53]

Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar, Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh Sinha, and Reyhaneh Jabbarvand. 2024. Lost in translation: A study of bugs introduced by large language models while translating code. In Proceedings of the IEEE/ACM 46th International Conference on Software Enginee...

  54. [54]

Will Pearson. 2026. Python lib for TOML. https://github.com/uiri/toml/tree/master/toml

  55. [55]

    James Polera. 2026. gonameparts. https://github.com/polera/gonameparts

  56. [56]

Mono Project. 2026. Sharpen - Automated Java->C# coversion. https://github.com/mono/sharpen

  57. [57]

Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, and others. 2024. ChatDev: Communicative Agents for Software Development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 15174–15186

  58. [58]

    ReCodeAgent. 2026. Artifact Website. https://doi.org/10.5281/zenodo.19337799

  59. [59]

Baptiste Roziere, Marie-Anne Lachaux, Lowik Chanussot, and Guillaume Lample. 2020. Unsupervised translation of programming languages. Advances in Neural Information Processing Systems 33 (2020), 20601–20611

  60. [60]

Baptiste Roziere, Jie M Zhang, Francois Charton, Mark Harman, Gabriel Synnaeve, and Guillaume Lample. 2021. Leveraging automated unit tests for unsupervised code translation. arXiv preprint arXiv:2110.06773 (2021)

  61. [61]

Haifeng Ruan, Yuntong Zhang, and Abhik Roychoudhury. 2025. SpecRover: Code Intent Extraction via LLMs. In 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE). 963–974. doi:10.1109/ICSE55347.2025.00080

  62. [62]

    Manish Shetty, Naman Jain, Adwait Godbole, Sanjit A Seshia, and Koushik Sen. 2024. Syzygy: Dual Code-Test C to (safe) Rust Translation using LLMs and Dynamic Analysis. arXiv preprint arXiv:2412.14234 (2024)

  64. [64]

HoHyun Sim, Hyeonjoong Cho, Yeonghyeon Go, Zhoulai Fu, Ali Shokri, and Binoy Ravindran. 2025. Large Language Model-Powered Agent for C to Rust Code Translation. arXiv preprint arXiv:2505.15858 (2025)

  65. [65]

Weiwei Sun, Miao Lu, Zhan Ling, Kang Liu, Xuesong Yao, Yiming Yang, and Jiecao Chen. 2025. Scaling long-horizon LLM agent via context-folding. arXiv preprint arXiv:2510.11967 (2025)

  66. [66]

The Claude Code Team. 2026. Claude Code. https://github.com/anthropics/claude-code

  67. [67]

The Eclipse Team. 2026. Eclipse JDT Language Server. https://github.com/eclipse-jdtls/eclipse.jdt.ls

  68. [68]

    The Go Team. 2026. Gopls: The language server for Go. https://go.dev/gopls/

  69. [69]

    The LLVM Team. 2026. clangd. https://github.com/clangd/clangd

  70. [70]

The Python Team. 2026. Conversion functions between RGB and other color systems. https://github.com/python/cpython/blob/3.13/Lib/colorsys.py

  71. [71]

The Python Team. 2026. Heap queue algorithm (a.k.a. priority queue). https://github.com/python/cpython/blob/3.13/Lib/heapq.py

  72. [72]

The Python Team. 2026. A parser for HTML and XHTML. https://github.com/python/cpython/blob/3.13/Lib/html/parser.py

  73. [73]

The Qwen Team. 2026. Qwen Embedding. https://huggingface.co/Qwen/Qwen3-Embedding-0.6B

  74. [74]

    The Rust Language Team. 2026. Rust Analyzer. https://rust-analyzer.github.io/

  75. [75]

The Spyder IDE Team. 2026. Python LSP Server. https://github.com/python-lsp/python-lsp-server

  76. [76]

    The TypeScript Language Server Team. 2026. TypeScript Language Server. https://github.com/typescript-language-server/typescript-language-server

  77. [77]

Sindhu Tipirneni, Ming Zhu, and Chandan K Reddy. 2024. StructCoder: Structure-aware transformer for code generation. ACM Transactions on Knowledge Discovery from Data 18, 3 (2024), 1–20

  78. [78]

    Osamu Tonomori. 2026. Checkdigit. https://github.com/osamingo/checkdigit

  79. [79]

    Go Transpile. 2024. C to Go Translator. https://github.com/gotranspile/cxgo

  80. [80]

    Tree-Sitter. 2026. Tree-Sitter Library. https://tree-sitter.github.io/tree-sitter/

Showing first 80 references.