Grammatical Error Correction (GEC) System
A Grammatical Error Correction (GEC) System is a text error correction system that implements a GEC algorithm to solve a GEC task (i.e., to correct grammatical errors in text).
- Context:
- It can be supported by a Grammatical Error Detection System.
- It can range from being a Word-Level Grammatical Error Correction (GEC) System to being a Character-Level Grammatical Error Correction (GEC) System.
- …
- Example(s):
- an English GEC System, such as:
- one applied to a CoNLL-14 Benchmark Dataset, such as: (Ng et al., 2014).
- GECToR (Omelianchuk et al., 2020).
- …
- Counter-Example(s):
- See: Text Sentence, Transcription Error, Seq2Seq Neural Network, Natural Language Processing System, Encoder-Decoder Neural Network.
References
2020
- (Omelianchuk et al., 2020) ⇒ Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. (2020). “GECToR – Grammatical Error Correction: Tag, Not Rewrite.” In: ACL 2020 Workshop on Innovative Use of NLP for Building Educational Applications.
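The “tag, not rewrite” approach treats GEC as token-level edit tagging rather than full sequence rewriting: each source token receives an edit tag, and applying the tags yields the correction. A minimal sketch of applying such tags (the tag names follow GECToR's conventions, but this toy applier and the hand-written predictions are illustrative, not the paper's implementation):

```python
# Toy illustration of tag-based GEC in the style of GECToR:
# each source token gets an edit tag; applying the tags yields the
# corrected sentence. Tags here are hand-written, not model predictions.

def apply_edit_tags(tokens, tags):
    """Apply per-token edit tags to produce the corrected token sequence."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag == "$KEEP":
            out.append(token)
        elif tag == "$DELETE":
            continue  # drop the token entirely
        elif tag.startswith("$REPLACE_"):
            out.append(tag[len("$REPLACE_"):])  # substitute a new token
        elif tag.startswith("$APPEND_"):
            out.append(token)
            out.append(tag[len("$APPEND_"):])  # insert a token after this one
    return out

# "I'm going to store" -> "I'm going to the store"
tokens = ["I'm", "going", "to", "store"]
tags = ["$KEEP", "$KEEP", "$APPEND_the", "$KEEP"]
print(" ".join(apply_edit_tags(tokens, tags)))  # → I'm going to the store
```

Because the model only predicts tags from a fixed inventory, inference can be a single (optionally iterated) tagging pass rather than autoregressive decoding, which is what makes this family of systems faster than seq2seq rewriters.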
2018
- (Chollampatt & Ng, 2018) ⇒ Shamil Chollampatt, and Hwee Tou Ng. (2018). “A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction.” In: Proceedings of the Thirty-Second Conference on Artificial Intelligence (AAAI-2018).
2017
- (Ji, Wang, et al., 2017) ⇒ Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. (2017). “A Nested Attention Neural Hybrid Model for Grammatical Error Correction.” In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, (ACL-2017).
- ABSTRACT: Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.
2017b
- https://www.onlinecorrection.com/
- QUOTE: OnlineCorrection.com is a tool designed to find spelling, as well as basic grammar and stylistic mistakes, in English texts.
2017c
- https://github.com/atpaino/deep-text-corrector
- QUOTE: Deep Text Corrector uses TensorFlow to train sequence-to-sequence models that are capable of automatically correcting small grammatical errors in conversational written English (e.g. SMS messages). It does this by taking English text samples that are known to be mostly grammatically correct and randomly introducing a handful of small grammatical errors (e.g. removing articles) to each sentence to produce input-output pairs (where the output is the original sample), which are then used to train a sequence-to-sequence model.
While context-sensitive spell-check systems are able to automatically correct a large number of input errors in instant messaging, email, and SMS messages, they are unable to correct even simple grammatical errors. For example, the message "I'm going to store" would be unaffected by typical autocorrection systems, when the user most likely intended to write "I'm going to the store". These kinds of simple grammatical mistakes are common in so-called "learner English", and constructing systems capable of detecting and correcting these mistakes has been the subject of multiple CoNLL shared tasks.
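The corruption-based construction of training pairs described in the quote above can be sketched as follows (the specific perturbation rule of dropping articles and the drop rate are illustrative assumptions, not the repository's actual configuration):

```python
import random

# Sketch of synthetic training-pair generation for GEC: start from
# mostly-correct sentences and randomly inject small errors (here,
# removing articles), yielding (corrupted, original) pairs that can
# train a sequence-to-sequence corrector.

ARTICLES = {"a", "an", "the"}

def corrupt(sentence, drop_prob=0.25, seed=None):
    """Randomly remove articles from a grammatical sentence."""
    rng = random.Random(seed)
    kept = [tok for tok in sentence.split()
            if not (tok.lower() in ARTICLES and rng.random() < drop_prob)]
    return " ".join(kept)

def make_pairs(corpus, seed=0):
    """Build (noisy input, clean target) pairs for seq2seq training."""
    return [(corrupt(s, seed=seed + i), s) for i, s in enumerate(corpus)]

corpus = ["I'm going to the store", "She found a job in the city"]
for noisy, clean in make_pairs(corpus):
    print(noisy, "->", clean)
```

The key design point is that the clean sentence is always the target: no hand-annotated corrections are needed, only a corpus of text assumed to be mostly grammatical.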
2014
- (Ng et al., 2014) ⇒ Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. (2014). “The CoNLL-2014 Shared Task on Grammatical Error Correction.” In: Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task.