Bahdanau-Cho-Bengio Neural Machine Translation Task
A Bahdanau-Cho-Bengio Neural Machine Translation Task is a Neural Machine Translation Task that generates translated text items by training an RNN Encoder-Decoder Neural Network with an alignment (attention) mechanism.
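The alignment mechanism scores each encoder hidden state against the previous decoder state and forms a weighted context vector for the next decoding step. A minimal NumPy sketch of this additive (Bahdanau-style) attention follows; the variable names, dimensions, and parameter shapes are illustrative assumptions, not taken verbatim from the paper:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """Bahdanau-style alignment sketch (shapes are illustrative):
    s_prev: (d_s,)   previous decoder state s_{i-1}
    H:      (T, d_h) encoder hidden states h_1..h_T
    W_a:    (d_s, d_a), U_a: (d_h, d_a), v_a: (d_a,) alignment parameters
    Returns the context vector c_i and the alignment weights alpha_i."""
    # e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j), computed for all j at once
    scores = np.tanh(s_prev @ W_a + H @ U_a) @ v_a   # (T,)
    alpha = softmax(scores)                          # alignment weights, sum to 1
    context = alpha @ H                              # (d_h,) context vector c_i
    return context, alpha
```

The weights `alpha` are what the paper visualizes as soft word alignments between the source and target sentences.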
- AKA: Neural Machine Translation via Alignment Mechanism, Bahdanau-Cho-Bengio RNNsearch Training Task.
- Context:
- Task Input: text items written in natural language A.
- Task Output: translated text items in natural language B.
- Task Requirement(s):
- Benchmark Datasets:
- Training dataset: (1) a 348M-word corpus resulting from the combination of several WMT'14 English-French parallel corpora: Europarl + news commentary + 2 crawled corpora; (2) a list of the 30,000 most frequent words for post-tokenization training.
- Development/validation dataset: the concatenation of the WMT news-test-2012 and news-test-2013 datasets.
- Test dataset: the WMT news-test-2014 dataset.
- Benchmark Performance Metrics: BLEU Score.
- It can be solved by a Bahdanau-Cho-Bengio Neural Machine Translation System.
- Bahdanau et al. (2015) also compared the translated text items generated by the two baseline models to those generated by the MOSES phrase-based translation system (Koehn et al., 2007).
- Example(s):
- Bahdanau et al. (2015)'s BLEU Scores of the generated translations on the test set with respect to the lengths of the sentences:
- Bahdanau et al. (2015)'s BLEU scores of the trained models computed on the test set:
{| class="wikitable" style="border:1px solid black; text-align:center; border-spacing:1px; margin:1em auto; width:80%"
|-
! Model !! All !! No UNK°
|-
| RNNencdec-30 || 13.93 || 24.19
|-
| RNNsearch-30 || 21.50 || 31.44
|-
| RNNencdec-50 || 17.82 || 26.71
|-
| RNNsearch-50 || 26.75 || 34.16
|-
| RNNsearch-50* || 28.45 || 36.15
|-
| Moses || 33.30 || 35.63
|}
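The scores in the table above are BLEU scores. A minimal sketch of sentence-level BLEU with modified n-gram precision and a brevity penalty follows; this is a simplified single-reference version for illustration (real evaluations use corpus-level BLEU with multiple references and tokenization conventions):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ng = ngrams(cand, n)
        ref_ng = ngrams(ref, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_ng[g]) for g, c in cand_ng.items())
        total = max(sum(cand_ng.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision makes the geometric mean zero
        log_prec += math.log(overlap / total) / max_n
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec)
```

A perfect match scores 1.0, and any candidate missing all n-grams of some order scores 0.0.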
- Counter-Example(s):
- a Cho et al. (2014a) RNN Encoder-Decoder Neural Machine Translation Task (without an alignment mechanism).
- See: Machine Translation Task, RNN Encoder-Decoder Network, Attention Mechanism, Attention-Encoder-Decoder Neural Network.
References
2015
- (Bahdanau et al., 2015) ⇒ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. (2015). “Neural Machine Translation by Jointly Learning to Align and Translate.” In: Proceedings of the Third International Conference on Learning Representations, (ICLR-2015).
2014
- (Cho et al., 2014a) ⇒ Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. (2014). “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation”. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP-2014). arXiv:1406.1078.
2007
- (Koehn et al., 2007) ⇒ Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. (2007). “Moses: Open Source Toolkit for Statistical Machine Translation.” In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions (ACL 2007).