Automated Language Generation (NLG) System
An Automated Language Generation (NLG) System is an automated writing system that implements an NLG algorithm to solve an NLG task.
- AKA: Natural Language Generator.
- Context:
- It can (typically) apply an NLG Model.
- It can range from being a General NLG System to being a Domain-Specific NLG System.
- It can range from being a Written NLG System (such as a Text-Output NLG System or a Handwritten NLG System) to being a Voice-Output NLG System (which may use a Text-to-Speech System).
- It can range from being a Data-Driven Text Generation System to being a Heuristic Text Generation System (see the template-based sketch after this list).
- It can be supported by an NLG Platform.
- It can incorporate machine learning techniques such as deep learning and reinforcement learning (see the GPT-2 sketch after this list).
- It can be designed for various purposes such as content creation, summarization, translation, or dialogue generation.
- It can face challenges related to coherence, factual accuracy, and ethical considerations.
- It can be evaluated using both automatic metrics (such as BLEU or perplexity) and human evaluation (see the BLEU sketch after this list).
- ...
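At the heuristic end of the range above, a Heuristic Text Generation System can be as simple as a hand-written template filled with fields from a data record. The following is a minimal sketch; the record schema and template wording are illustrative assumptions, not taken from any cited system:

```python
# Minimal heuristic (template-based) data-to-text sketch.
# The record schema and template wording are illustrative assumptions.
record = {"city": "Boston", "condition": "sunny", "temp_c": 21}

TEMPLATE = "In {city}, it is {condition} with a temperature of {temp_c} degrees Celsius."

def realize(rec: dict) -> str:
    """Fill the fixed template with values from one data record."""
    return TEMPLATE.format(**rec)

print(realize(record))
# -> In Boston, it is sunny with a temperature of 21 degrees Celsius.
```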
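By contrast, a Language Model-based NLG System (such as the GPT2-based NLG System listed under the examples below) generates text by sampling from a learned distribution over token sequences. Here is a minimal sketch using the Hugging Face transformers library; the library choice and the decoding parameters are assumptions, and any autoregressive language model implementation would serve:

```python
# Minimal sketch of a GPT-2-based NLG system via Hugging Face transformers.
# Library choice and decoding parameters are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The city forecast for tomorrow is", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=40,                         # stop after 40 tokens
    do_sample=True,                        # sample rather than greedy-decode
    top_p=0.95,                            # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```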
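For automatic evaluation, n-gram overlap metrics such as BLEU compare generated text against human-written references. A minimal sketch using NLTK's sentence-level BLEU follows; the metric and library choice are assumptions (the Texygen platform cited below bundles several such metrics):

```python
# Minimal sketch of automatic NLG evaluation with sentence-level BLEU (NLTK).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]   # one human reference
candidate = ["the", "cat", "is", "on", "the", "mat"]      # system output

# Smoothing avoids zero scores when some n-gram orders have no overlap.
smoothie = SmoothingFunction().method1
score = sentence_bleu(reference, candidate, smoothing_function=smoothie)
print(f"BLEU: {score:.3f}")
```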
- Example(s):
- Model-Based NLG Systems, such as:
- Language Model-based NLG System or Neural-based NLG System, such as a GPT2-based NLG System.
- Task-Specific NLG Systems, such as:
- Commercial NLG Systems, such as:
- Research-Oriented NLG Systems, such as:
- Open-Source NLG Implementations, such as:
- NLG Benchmarking Systems, such as Texygen (Texygen Benchmark Task).
- General Purpose NLG Systems, such as an Automated Writing System.
- ...
- Counter-Example(s):
- a Natural Language Understanding (NLU) System, which maps text to meaning representations rather than producing text.
- See: Generation System, Text Editing System, Linguistic Item Completion System.
References
2018a
- (Clark et al., 2018) ⇒ Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. (2018). “Neural Text Generation in Stories Using Entity Representations As Context.” In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long Papers). DOI:10.18653/v1/N18-1204.
2018b
- (Fedus et al., 2018) ⇒ William Fedus, Ian Goodfellow, and Andrew M. Dai. (2018). “MaskGAN: Better Text Generation via Filling in the ________.” In: Proceedings of the Sixth International Conference on Learning Representations (ICLR-2018).
2018c
- (Guo et al., 2018) ⇒ Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. (2018). “Long Text Generation via Adversarial Training with Leaked Information.” In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18).
2018d
- (Kudo & Richardson, 2018) ⇒ Taku Kudo, and John Richardson. (2018). “SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing.” In: arXiv preprint arXiv:1808.06226.
2018e
- (Lee et al., 2018) ⇒ Chris van der Lee, Emiel Krahmer, and Sander Wubben. (2018). “Automated Learning of Templates for Data-to-text Generation: Comparing Rule-based, Statistical and Neural Methods.” In: Proceedings of the 11th International Conference on Natural Language Generation (INLG 2018). DOI:10.18653/v1/W18-6504.
2018f
- (Song et al., 2018) ⇒ Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. (2018). “A Graph-to-Sequence Model for AMR-to-Text Generation.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Volume 1: Long Papers. DOI:10.18653/v1/P18-1150.
2018g
- (Zhu et al., 2018) ⇒ Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. (2018). “Texygen: A Benchmarking Platform for Text Generation Models.” In: Proceedings of The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR 2018). DOI:10.1145/3209978.3210080.
2017a
- (Zhang et al., 2017) ⇒ Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. (2017). “Adversarial Feature Matching for Text Generation.” In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017).
2017b
- (Li et al., 2017) ⇒ Jiwei Li, Will Monroe, Tianlin Shi, Sebastien Jean, Alan Ritter, and Dan Jurafsky. (2017). “Adversarial Learning for Neural Dialogue Generation.” In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). DOI:10.18653/v1/D17-1230.
2017c
- (Lin et al., 2017) ⇒ Kevin Lin, Dianqi Li, Xiaodong He, Ming-ting Sun, and Zhengyou Zhang. (2017). “Adversarial Ranking for Language Generation.” In: Proceedings of Advances in Neural Information Processing Systems 30 (NIPS-2017).
2017d
- (Che et al., 2017) ⇒ Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. (2017). “Maximum-Likelihood Augmented Discrete Generative Adversarial Networks.” In: arXiv preprint arXiv:1702.07983.
2017e
- (Semeniuta et al., 2017) ⇒ Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. (2017). “A Hybrid Convolutional Variational Autoencoder for Text Generation.” In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). DOI:10.18653/v1/D17-1066.
2017f
- (Yu et al., 2017) ⇒ Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. (2017). “SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient.” In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017).