Lee-Krahmer-Wubben Data-To-Text Generation System
A Lee-Krahmer-Wubben (LKW) Data-To-Text Generation System is a Data-to-Text Generation System that can solve a Lee-Krahmer-Wubben Data-To-Text Generation Task by implementing Lee-Krahmer-Wubben Data-To-Text Generation Algorithms.
- Context:
- It was developed by van der Lee, Krahmer, and Wubben (Lee et al., 2018).
- GitHub Repository: https://github.com/TallChris91/Automated-Template-Learning
- System's Architecture:
- Training and ML Tools:
- SR models were trained using a cosine similarity score method (Lee et al., 2018, Sec. 3.2.1); see the first sketch after this list.
- MOSES toolkit (Koehn et al., 2007) for training the SMT models.
- Bayesian Optimization (Snoek et al., 2012) for tuning the SMT and NMT parameters for each corpus; see the second sketch after this list.
- OpenNMT-py toolkit (Klein et al., 2017) for training the NMT models.
- Pre-trained word embeddings (Qi et al., 2018) for boosting performance on the smaller corpora; see the third sketch after this list.
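The cosine-similarity step can be illustrated with a short, hypothetical Python sketch (not the authors' implementation): templatized training sentences are turned into token-count vectors, and the stored template most similar to a new delexicalized input is retrieved. The template strings, slot names, and the helper `most_similar_template` are invented for illustration.

```python
# Minimal sketch (not the paper's exact method): retrieve the training
# template whose token-count vector is most cosine-similar to a new,
# delexicalized input string.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical templatized training sentences (attribute slots in brackets).
templates = [
    "[TEAM1] beat [TEAM2] [SCORE1] - [SCORE2] .",
    "[TEAM1] and [TEAM2] drew [SCORE1] - [SCORE2] .",
    "[TEAM1] lost at home to [TEAM2] .",
]

vectorizer = CountVectorizer(token_pattern=r"\S+")
template_vectors = vectorizer.fit_transform(templates)

def most_similar_template(query: str) -> str:
    """Return the stored template with the highest cosine similarity score."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, template_vectors)[0]
    return templates[scores.argmax()]

print(most_similar_template("[TEAM1] beat [TEAM2] [SCORE1] - [SCORE2] ."))
```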
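The Bayesian Optimization step can likewise be sketched with scikit-optimize's gp_minimize standing in for the method of Snoek et al. (2012); the hyperparameters in the search space and the objective function below are hypothetical placeholders for the real "train the SMT/NMT model and return the negative development-set BLEU" loop.

```python
# Minimal sketch, assuming scikit-optimize as a stand-in Bayesian Optimization
# library; the objective is a placeholder, not an actual SMT/NMT training run.
from skopt import gp_minimize
from skopt.space import Real, Integer

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),  # assumed hyperparameter
    Integer(1, 6, name="beam_size"),                              # assumed hyperparameter
]

def objective(params):
    learning_rate, beam_size = params
    # Placeholder: in the real pipeline this would train and decode a model on
    # the corpus and return the negative validation BLEU score to be minimized.
    return (learning_rate - 0.01) ** 2 + (beam_size - 4) ** 2 * 1e-4

result = gp_minimize(objective, search_space, n_calls=20, random_state=0)
print("best parameters:", result.x, "best objective:", result.fun)
```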
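Finally, the use of pre-trained word embeddings can be sketched as initializing an embedding matrix from GloVe-style vectors before NMT training; the tiny in-memory vector "file", the vocabulary, and the dimensionality below are made up for the example.

```python
# Minimal sketch: build an embedding matrix from pre-trained word vectors,
# leaving words without a pre-trained vector as zero rows.
import numpy as np

pretrained_text = """\
goal 0.10 0.20 0.30
match 0.40 0.10 0.00
team 0.05 0.25 0.15
"""

def load_vectors(text):
    """Parse GloVe-style 'word v1 v2 ...' lines into a word-to-vector dict."""
    vectors = {}
    for line in text.strip().splitlines():
        word, *values = line.split()
        vectors[word] = np.array(values, dtype=float)
    return vectors

vocab = ["<unk>", "team", "goal", "score"]  # hypothetical target vocabulary
pretrained = load_vectors(pretrained_text)
dim = 3
embedding_matrix = np.zeros((len(vocab), dim))
for index, word in enumerate(vocab):
    if word in pretrained:
        embedding_matrix[index] = pretrained[word]  # copy the pre-trained vector

print(embedding_matrix)
```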
- Example(s):
- …
- Counter-Example(s):
- See: Mention Generation System, Text Generation System, Natural Language Processing System, Natural Language Generation System, Natural Language Understanding System, Natural Language Inference System, Computing Benchmark System.
References
2018a
- (Lee et al., 2018) ⇒ Chris van der Lee, Emiel Krahmer, and Sander Wubben. (2018). “Automated Learning of Templates for Data-to-text Generation: Comparing Rule-based, Statistical and Neural Methods.” In: Proceedings of the 11th International Conference on Natural Language Generation (INLG 2018). DOI:http://dx.doi.org/10.18653/v1/W18-6504
- QUOTE: The current work investigated differences in output quality for data-to-text generation using 'direct' data-to-text conversion and extended models (see figure 1). For this extended model, the input representation and the text examples in the train and development set were 'templatized'. (...)
2018b
- (Qi et al., 2018) ⇒ Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. (2018). “When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?” In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), Volume 2 (Short Papers). DOI:10.18653/v1/N18-2084.
2017
- (Klein et al., 2017) ⇒ Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. (2017). “OpenNMT: Open-Source Toolkit for Neural Machine Translation.” In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), System Demonstrations.
2012
- (Snoek et al., 2012) ⇒ Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. (2012). “Practical Bayesian Optimization of Machine Learning Algorithms.” In: Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS-2012).
2007
- (Koehn et al., 2007) ⇒ Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. (2007). “Moses: Open Source Toolkit for Statistical Machine Translation.” In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Companion Volume: Proceedings of the Demo and Poster Sessions (ACL 2007).