Code-to-Sequence (code2seq) Neural Network
A Code-to-Sequence (code2seq) Neural Network is an Encoder-Decoder Neural Network that represents a code snippet as a set of Abstract Syntax Tree paths and can be trained to solve automatic code summarization tasks.
- AKA: code2seq
- Context:
- It can be trained by a Neural Code-to-Sequence Training System that implements a Code-to-Sequence Training Algorithm to solve a Code-to-Sequence Training Task.
- Example(s):
- a code2seq model trained to predict a Java method's name from its body, as in Alon et al. (2018).
- Counter-Example(s):
- See: Abstract Syntax Tree, Automatic Code Documentation Task, Automatic Code Retrieval Task, Automatic Code Generation Task, Automatic Code Captioning Task, Encoder-Decoder Neural Network, Neural Machine Translation (NMT) Algorithm.
References
2018
- (Alon et al., 2018) ⇒ Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. (2018). “code2seq: Generating Sequences from Structured Representations of Code.” In: Proceedings of the 7th International Conference on Learning Representations (ICLR 2019).
- QUOTE: Our model follows the standard encoder-decoder architecture for NMT (Section 3.1), with the significant difference that the encoder does not read the input as a flat sequence of tokens. Instead, the encoder creates a vector representation for each AST path separately (Section 3.2). The decoder then attends over the encoded AST paths (rather than the encoded tokens) while generating the target sequence. Our model is illustrated in Figure 3.
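The following is a minimal PyTorch sketch of the encoder-decoder design described in the quote above: each AST path is encoded separately (summed subtoken embeddings for the path's terminals plus a bidirectional LSTM over the path's node sequence), and the decoder attends over the resulting path vectors while generating the target sequence. All module names, layer sizes, and the mean-pooled initialization of the decoder state are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Code2SeqEncoder(nn.Module):
    """Encodes each AST path (start terminal, node sequence, end terminal)
    into a single vector, independently of the other paths.

    Hypothetical, simplified sketch: subtoken embeddings are summed,
    the node sequence is run through a bidirectional LSTM, and the
    three parts are fused by a linear layer.
    """

    def __init__(self, subtoken_vocab, node_vocab, dim=128):
        super().__init__()
        self.subtoken_emb = nn.Embedding(subtoken_vocab, dim)
        self.node_emb = nn.Embedding(node_vocab, dim)
        self.path_lstm = nn.LSTM(dim, dim // 2, bidirectional=True,
                                 batch_first=True)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, start_subtokens, path_nodes, end_subtokens):
        # start_subtokens, end_subtokens: (batch, num_paths, max_subtokens)
        # path_nodes: (batch, num_paths, max_path_len)
        start = self.subtoken_emb(start_subtokens).sum(dim=2)   # (B, P, D)
        end = self.subtoken_emb(end_subtokens).sum(dim=2)       # (B, P, D)
        b, p, l = path_nodes.shape
        nodes = self.node_emb(path_nodes).view(b * p, l, -1)
        _, (h, _) = self.path_lstm(nodes)          # h: (2, B*P, D/2)
        path = h.transpose(0, 1).reshape(b, p, -1)  # (B, P, D)
        return torch.tanh(self.fuse(torch.cat([start, path, end], dim=-1)))


class Code2SeqDecoder(nn.Module):
    """Generates the target sequence while attending over the encoded
    AST paths (rather than over encoded tokens)."""

    def __init__(self, target_vocab, dim=128):
        super().__init__()
        self.target_emb = nn.Embedding(target_vocab, dim)
        self.cell = nn.LSTMCell(2 * dim, dim)
        self.out = nn.Linear(dim, target_vocab)

    def forward(self, path_vecs, targets):
        # path_vecs: (B, P, D); targets: (B, T) teacher-forced token ids
        h = path_vecs.mean(dim=1)   # assumed init: average of path vectors
        c = torch.zeros_like(h)
        logits = []
        for t in range(targets.size(1)):
            emb = self.target_emb(targets[:, t])                # (B, D)
            # dot-product attention of the decoder state over path vectors
            scores = torch.bmm(path_vecs, h.unsqueeze(-1)).squeeze(-1)
            attn = F.softmax(scores, dim=-1).unsqueeze(1)       # (B, 1, P)
            context = torch.bmm(attn, path_vecs).squeeze(1)     # (B, D)
            h, c = self.cell(torch.cat([emb, context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)   # (B, T, target_vocab)
```

A quick shape check with random inputs (all vocabulary sizes and tensor shapes are arbitrary):

```python
enc = Code2SeqEncoder(subtoken_vocab=1000, node_vocab=50)
dec = Code2SeqDecoder(target_vocab=1000)
paths = enc(torch.randint(1000, (2, 20, 5)),   # start terminals
            torch.randint(50, (2, 20, 9)),     # path node sequences
            torch.randint(1000, (2, 20, 5)))   # end terminals
logits = dec(paths, torch.randint(1000, (2, 6)))  # (2, 6, 1000)
```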