pith. machine review for the scientific record.

arxiv: 1808.01400 · v6 · submitted 2018-08-04 · 💻 cs.LG · cs.PL · stat.ML

Recognition: unknown

code2seq: Generating Sequences from Structured Representations of Code

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.PL · stat.ML
keywords code · code2seq · models · languages · model · programming · source · approach
0 comments
read the original abstract

The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present ${\rm {\scriptsize CODE2SEQ}}$: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to $16$M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models. An interactive online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq.
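The abstract's core mechanism, encoding a snippet as leaf-to-leaf AST paths and attending over them while decoding, is concrete enough to sketch. Below is a minimal illustration in Python. The paper's extractors target Java and C#; Python's stdlib ast module stands in here purely for illustration, and every name (extract_leaf_paths, the random "encodings", the toy attention step) is our assumption, not the authors' code2seq API.

import ast
import itertools
import numpy as np

def extract_leaf_paths(source, max_paths=20):
    """Enumerate leaf-to-leaf AST paths as sequences of node-type names.
    Simplification: any node without AST children counts as a leaf, so
    context markers like Load/Store appear alongside args and constants."""
    tree = ast.parse(source)
    parents = {}
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            parents[child] = node
    leaves = [n for n in ast.walk(tree) if not list(ast.iter_child_nodes(n))]

    def path_to_root(node):
        chain = [node]
        while chain[-1] in parents:
            chain.append(parents[chain[-1]])
        return chain

    paths = []
    for a, b in itertools.combinations(leaves, 2):
        up, down = path_to_root(a), path_to_root(b)
        ids_down = set(map(id, down))
        # climb from `a` to the lowest common ancestor, then descend to `b`
        prefix = list(itertools.takewhile(lambda n: id(n) not in ids_down, up))
        lca = up[len(prefix)]
        suffix = list(itertools.takewhile(lambda n: n is not lca, down))
        paths.append([type(n).__name__ for n in prefix + [lca] + suffix[::-1]])
    return paths[:max_paths]

paths = extract_leaf_paths("def add(a, b):\n    return a + b")
print(paths[0])  # e.g. ['arg', 'arguments', 'arg']

# Toy attention over path encodings: the decoder attends over one vector per
# path (random stand-ins here for the paper's bi-LSTM path encodings).
rng = np.random.default_rng(0)
H = rng.normal(size=(len(paths), 8))  # one 8-dim encoding per path
q = rng.normal(size=8)                # decoder state
scores = H @ q
weights = np.exp(scores - scores.max())
weights /= weights.sum()              # softmax over paths
context = weights @ H                 # attention-weighted path summary

The real model additionally splits identifiers into subtokens, encodes each path with a bidirectional LSTM, and recomputes the attention weights at every decoding step; the sketch shows only the path enumeration and a single attention read.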

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. CodeBLEU: a Method for Automatic Evaluation of Code Synthesis

cs.SE · 2020-09 · conditional · novelty 7.0

CodeBLEU improves correlation with human programmer scores on code synthesis tasks by adding syntactic AST matching and semantic data-flow matching to the standard BLEU n-gram approach; a sketch of the combination appears after this list.

  2. GraphCodeBERT: Pre-training Code Representations with Data Flow

cs.SE · 2020-09 · accept · novelty 7.0

    GraphCodeBERT uses data flow graphs in pre-training to capture semantic code structure and reaches state-of-the-art results on code search, clone detection, translation, and refinement.

  3. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search

cs.LG · 2019-09 · accept · novelty 7.0

Releases a large multi-language code corpus and an expert-annotated challenge set to benchmark semantic code search.

  4. LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

cs.SE · 2024-03 · unverdicted · novelty 6.0

    LiveCodeBench collects 400 recent contest problems to create a contamination-free benchmark evaluating LLMs on code generation and related capabilities like self-repair and execution.
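For item 1 above, the CodeBLEU combination is simple enough to state directly. A minimal sketch, assuming the four component scores (standard BLEU, keyword-weighted BLEU, AST match, data-flow match) are computed elsewhere; the equal default weights follow the CodeBLEU paper, while the function name and example values are ours.

def codebleu(bleu, weighted_bleu, ast_match, dataflow_match,
             alpha=0.25, beta=0.25, gamma=0.25, delta=0.25):
    # CodeBLEU = alpha*BLEU + beta*BLEU_weight + gamma*Match_ast + delta*Match_df
    return (alpha * bleu + beta * weighted_bleu
            + gamma * ast_match + delta * dataflow_match)

print(codebleu(0.60, 0.65, 0.80, 0.75))  # 0.70 with equal weights

Raising gamma and delta rewards outputs that are syntactically and semantically well-formed even when their surface tokens diverge from the reference, which is the paper's stated motivation for moving beyond pure n-gram overlap.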