Parallel Iterative Edit (PIE) System
A Parallel Iterative Edit (PIE) System is an iterative GEC Sequence Tagging System that predicts token-level edit operations and can solve local sequence transduction tasks.
- AKA: PIE-GEC System.
- Context:
- Source code available at: https://github.com/awasthiabhijeet/PIE
- It was developed by Awasthi et al. (2019).
- It can perform fast grammatical error correction using a PIE Neural Network that is based on a BERT architecture.
- It uses a Seq2Edits algorithm to convert a sequence pair into in-place edits (a simplified sketch of this encoding step is shown below).
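The following is a minimal sketch of the edit-encoding idea, assuming a toy edit vocabulary (copy, delete, replace, append); the function name, labels, and alignment heuristics are illustrative and are not taken from the PIE repository or the paper's full Seq2Edits label set.

```python
import difflib

# Toy edit vocabulary (illustrative, not the paper's exact label set):
#   "C"      copy the source token unchanged
#   "D"      delete the source token
#   "R_<w>"  replace the source token with <w>
#   "A_<w>"  keep the source token and append <w> after it

def seq2edits(source_tokens, target_tokens):
    """Convert a (source, target) sentence pair into one edit label per source token."""
    labels = ["C"] * len(source_tokens)
    matcher = difflib.SequenceMatcher(a=source_tokens, b=target_tokens)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "delete":
            for i in range(i1, i2):
                labels[i] = "D"
        elif op == "replace":
            # Pair replaced spans position by position; surplus source tokens are deleted.
            for k, i in enumerate(range(i1, i2)):
                j = j1 + k
                labels[i] = f"R_{target_tokens[j]}" if j < j2 else "D"
        elif op == "insert":
            # Simplification: attach only the first inserted token to the preceding source token.
            labels[max(i1 - 1, 0)] = f"A_{target_tokens[j1]}"
    return labels

src = "he go to school yesterday".split()
tgt = "he went to school yesterday".split()
print(seq2edits(src, tgt))   # ['C', 'R_went', 'C', 'C', 'C']
```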
- Example(s):
- …
- Counter-Example(s):
- See: GEC Sequence Tagging System, Encoder-Decoder Neural Network, Seq2Seq Neural Network, Sequence Tagging System, Grammatical Error Correction System.
References
2021
- (GitHub, 2021) ⇒ https://github.com/awasthiabhijeet/PIE#pie-parallel-iterative-edit-models-for-local-sequence-transduction Retrieved: 2021-02-28.
- QUOTE: We present PIE, a BERT based architecture for local sequence transduction tasks like Grammatical Error Correction. Unlike the standard approach of modeling GEC as a task of translation from "incorrect" to "correct" language, we pose GEC as local sequence editing task. We further reduce local sequence editing problem to a sequence labeling setup where we utilize BERT to non-autoregressively label input tokens with edits. We rewire the BERT architecture (without retraining) specifically for the task of sequence editing. We find that PIE models for GEC are 5 to 15 times faster than existing state of the art architectures and still maintain a competitive accuracy.
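As a rough illustration of the sequence-labeling setup described in this quote, the sketch below puts a stock token-classification head on BERT over an edit-label vocabulary using the Hugging Face transformers library. This is not the rewired PIE architecture from the repository; the label set and model choice are assumptions for illustration, and the classification head is untrained here.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Hypothetical edit-label vocabulary (illustrative; PIE's real label space is larger).
EDIT_LABELS = ["C", "D", "A_the", "R_went", "R_goes"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(EDIT_LABELS)
)  # untrained head: predictions are random until fine-tuned on edit-labeled data

words = "he go to school yesterday".split()
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, num_wordpieces, num_labels)

# Every position is labeled in one forward pass (non-autoregressive).
pred_ids = logits.argmax(dim=-1)[0].tolist()
word_ids = inputs.word_ids(batch_index=0)      # map wordpieces back to input words
seen = set()
for wp_idx, word_idx in enumerate(word_ids):
    if word_idx is not None and word_idx not in seen:   # label each word by its first wordpiece
        seen.add(word_idx)
        print(words[word_idx], EDIT_LABELS[pred_ids[wp_idx]])
```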
2020
- (Omelianchuk et al., 2020) ⇒ Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem N. Chernodub, and Oleksandr Skurzhanskyi. (2020). “GECToR - Grammatical Error Correction: Tag, Not Rewrite.” In: Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL 2020).
- QUOTE: PIE (Awasthi et al., 2019) is an iterative sequence tagging GEC system that predicts token-level edit operations.
2019
- (Awasthi et al., 2019) ⇒ Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. (2019). “Parallel Iterative Edit Models for Local Sequence Transduction.” In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019).
- QUOTE: We take a fresh look at local sequence transduction tasks and present a new parallel-iterative edit (PIE) architecture. Unlike the prevalent ED model that is constrained to sequentially generating the tokens in the output, the PIE model generates the output in parallel, thereby substantially reducing the latency of sequential decoding on long inputs. (...). The PIE model incorporates the following four ideas to achieve comparable accuracy on tasks like GEC in spite of parallel decoding.
- 1. Output edits instead of tokens: (...)
- 2. Sequence labeling instead of sequence generation: (...)
- 3. Iterative refinement: (...)
- 4. Factorize pre-trained bidirectional LMs: (...)
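The iterative refinement idea (item 3 above) can be illustrated with a short loop that alternates between predicting one edit per token in parallel and applying those edits, stopping at a fixed point. The predictor below is a hard-coded stand-in for the trained PIE model, and the edit labels follow the same toy vocabulary used in the earlier sketches; none of this is the authors' implementation.

```python
def apply_edits(tokens, labels):
    """Apply per-token edit labels (C=copy, D=delete, R_<w>=replace, A_<w>=append after)."""
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == "C":
            out.append(tok)
        elif lab.startswith("R_"):
            out.append(lab[2:])
        elif lab.startswith("A_"):
            out.extend([tok, lab[2:]])
        # "D": token is dropped
    return out

def iterative_refinement(tokens, predict_edits, max_rounds=4):
    """Alternate parallel edit prediction and edit application until the sentence stops changing."""
    for _ in range(max_rounds):
        labels = predict_edits(tokens)        # one edit label per token, predicted all at once
        refined = apply_edits(tokens, labels)
        if refined == tokens:                 # fixed point reached: no further corrections
            return refined
        tokens = refined
    return tokens

# Hard-coded stand-in for the trained model (hypothetical, for illustration only).
def toy_predictor(tokens):
    fixes = {"go": "R_went", "a": "D"}
    return [fixes.get(t, "C") for t in tokens]

print(iterative_refinement("he go to a school yesterday".split(), toy_predictor))
# ['he', 'went', 'to', 'school', 'yesterday']
```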