Text Error Correction (TEC) Algorithm
A Text Error Correction (TEC) Algorithm is a string error correction algorithm that can be implemented by a text error correction system to solve a text error correction task.
- Context:
- It can range from being a Word/Token-level TEC Algorithm to being a Character-level TEC Algorithm (a minimal character-level sketch follows this list).
- It can (typically) be supported by a TEC Model Training Algorithm (for learning-based approaches).
- Example(s):
- a Grammatical Error Correction Algorithm (for grammatical error correction tasks).
- an Orthographic Error Correction Algorithm (for orthographic error correction tasks).
- a Collocation Error Correction Algorithm (for collocation error correction tasks).
- a Neural-based TEC Algorithm.
- an LM-based TEC Algorithm.
- …
- Counter-Example(s):
- a DNA Error Correction Algorithm.
- See: Text Error Correction System, Text Error Correction Task.
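The contrast between character-level and word/token-level correction can be made concrete with a minimal character-level sketch in the style of Norvig's well-known spelling corrector: generate all strings one character edit away from an input word and keep the candidate that is most frequent in a reference corpus. The toy corpus and single-edit candidate set are illustrative assumptions; a practical system would use a large corpus and a stronger scoring model.

```python
import re
from collections import Counter

# A toy corpus stands in for the large reference corpus a real system would use.
WORD_COUNTS = Counter(re.findall(
    r"[a-z]+",
    "the quick brown fox jumps over the lazy dog the dog sleeps",
))

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one character-level edit (delete/transpose/replace/insert) from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in LETTERS]
    inserts = [L + c + R for L, R in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct_word(word):
    """Return the known word closest to `word`, preferring the word itself if known."""
    candidates = ({word} & WORD_COUNTS.keys()) or (edits1(word) & WORD_COUNTS.keys()) or {word}
    return max(candidates, key=lambda w: WORD_COUNTS[w])

print(correct_word("dgo"))  # -> "dog" (transposition repaired)
```

A word/token-level algorithm would instead operate on whole tokens in context, for example by substituting candidate tokens and scoring the surrounding sentence, as in the LM-based sketch under the references below.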
References
2018b
- (Chollampatt & Ng, 2018) ⇒ Shamil Chollampatt, and Hwee Tou Ng. (2018). “A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction.” In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-2018).
- QUOTE: We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. ...
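As a rough illustration of the encoder-decoder family this paper belongs to, the following PyTorch sketch builds a small convolutional encoder-decoder for sequence-to-sequence correction. All architectural choices here (layer count, kernel size, a single dot-product attention over encoder states) are simplifying assumptions and do not reproduce Chollampatt & Ng's model, which uses attention at every decoder layer among other refinements.

```python
import torch
import torch.nn as nn

class ConvEncoderDecoder(nn.Module):
    """A minimal convolutional encoder-decoder for text correction (illustrative only)."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, n_layers=3, kernel=3):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_size, emb_dim)
        self.tgt_emb = nn.Embedding(vocab_size, emb_dim)
        pad = kernel // 2
        self.encoder = nn.ModuleList(
            nn.Conv1d(emb_dim if i == 0 else hid_dim, hid_dim, kernel, padding=pad)
            for i in range(n_layers))
        # Decoder convs are left-padded by kernel-1 so position t only sees positions <= t.
        self.decoder = nn.ModuleList(
            nn.Conv1d(emb_dim if i == 0 else hid_dim, hid_dim, kernel, padding=kernel - 1)
            for i in range(n_layers))
        self.attn_proj = nn.Linear(hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        # src, tgt: (batch, seq) token-id tensors; tgt is the right-shifted reference.
        e = self.src_emb(src).transpose(1, 2)            # (batch, emb, src_len)
        for conv in self.encoder:
            e = torch.relu(conv(e))
        enc = e.transpose(1, 2)                          # (batch, src_len, hid)

        d = self.tgt_emb(tgt).transpose(1, 2)            # (batch, emb, tgt_len)
        for conv in self.decoder:
            d = torch.relu(conv(d)[..., : tgt.size(1)])  # trim right overhang: causal
        dec = d.transpose(1, 2)                          # (batch, tgt_len, hid)

        # One dot-product attention over encoder states (the paper attends per layer).
        scores = self.attn_proj(dec) @ enc.transpose(1, 2)  # (batch, tgt_len, src_len)
        context = torch.softmax(scores, dim=-1) @ enc       # (batch, tgt_len, hid)
        return self.out(dec + context)                      # (batch, tgt_len, vocab)

model = ConvEncoderDecoder(vocab_size=100)
src = torch.randint(0, 100, (2, 12))   # corrupted token ids
tgt = torch.randint(0, 100, (2, 12))   # reference token ids (shifted right in practice)
logits = model(src, tgt)               # (2, 12, 100)
```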
2018a
- (Bryant & Briscoe, 2018) ⇒ Christopher Bryant, and Ted Briscoe. (2018). “Language Model Based Grammatical Error Correction Without Annotated Training Data.” In: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications.
- QUOTE: Since the end of the CoNLL-2014 shared task on grammatical error correction (GEC), research into language model (LM) based approaches to GEC has largely stagnated. In this paper, we re-examine LMs in GEC and show that it is entirely possible to build a simple system that not only requires minimal annotated data (~1000 sentences), but is also fairly competitive with several state-of-the-art systems. This approach should be of particular interest for languages where very little annotated training data exists, although we also hope to use it as a baseline to motivate future research.
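The LM-based approach described in this quote can be sketched as candidate reranking: propose single-token edits from small confusion sets and accept a change only if it raises the sentence's language-model score by some margin. The confusion sets, the use of GPT-2 as the scoring LM, and the acceptance margin below are illustrative assumptions, not Bryant & Briscoe's exact system.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_prob(sentence):
    """Total log-probability of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item() * ids.size(1)

# Illustrative confusion sets; "" means the token may be deleted outright.
CONFUSION_SETS = [
    {"a", "an", "the", ""},
    {"is", "are", "was", "were"},
    {"in", "on", "at"},
]

def correct(sentence, margin=1.0):
    """Keep the single one-token edit that most improves the LM score, if any."""
    tokens = sentence.split()
    best, best_score = sentence, log_prob(sentence)
    for i, tok in enumerate(tokens):
        for cset in CONFUSION_SETS:
            if tok.lower() not in cset:
                continue
            for alt in cset - {tok.lower()}:
                cand_tokens = tokens[:i] + ([alt] if alt else []) + tokens[i + 1:]
                cand = " ".join(cand_tokens)
                score = log_prob(cand)
                if score > best_score + margin:
                    best, best_score = cand, score
    return best

print(correct("She are a teacher"))  # -> "She is a teacher"
```

Because the LM itself needs no error-annotated data, this style of system fits the paper's setting of languages with little or no annotated training data; the annotated sentences mentioned in the quote are used only to tune choices such as the acceptance margin.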