2019 ParallelIterativeEditModelsforLocalSequenceTransduction

From GM-RKB

Subject Headings: PIE-GEC Sequence Tagging System; Parallel Iterative Edit (PIE) Neural Network; Seq2Edits.

Notes

Cited By


Quotes

Abstract

We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like Grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence to sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modeling full dependency in the output, yet it achieves accuracy competitive with the ED model for four reasons: 1. predicting edits instead of tokens, 2. labeling sequences instead of generating sequences, 3. iteratively refining predictions to capture dependencies, and 4. factorizing logits over edits and their token argument to harness pre-trained language models like BERT. Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.
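
Below is a minimal sketch of the parallel iterative edit decoding loop described in the abstract. The edit-label inventory (COPY / DEL / REPLACE_w / APPEND_w), the toy rule-based predictor, and all function names are illustrative assumptions, not the authors' implementation: in the actual PIE model a BERT-style sequence labeler predicts one edit per input token, and iterating the parallel pass recovers dependencies that a single pass misses.

from typing import Callable, List, Tuple

# An edit is (op, argument); the argument is only used by REPLACE and APPEND.
Edit = Tuple[str, str]

def apply_edits(tokens: List[str], edits: List[Edit]) -> List[str]:
    """Apply one edit per input token, all at once (no left-to-right decoding)."""
    out: List[str] = []
    for token, (op, arg) in zip(tokens, edits):
        if op == "COPY":
            out.append(token)
        elif op == "DEL":
            pass  # drop the token
        elif op == "REPLACE":
            out.append(arg)
        elif op == "APPEND":
            out.extend([token, arg])  # keep the token, insert arg after it
    return out

def toy_predictor(tokens: List[str]) -> List[Edit]:
    """Stand-in for the learned labeler: repairs one hypothetical spelling
    error and one hypothetical missing article per pass."""
    edits: List[Edit] = []
    for i, tok in enumerate(tokens):
        if tok == "beleive":
            edits.append(("REPLACE", "believe"))
        elif tok == "in" and i + 1 < len(tokens) and tokens[i + 1] == "future":
            edits.append(("APPEND", "the"))
        else:
            edits.append(("COPY", ""))
    return edits

def pie_decode(tokens: List[str],
               predictor: Callable[[List[str]], List[Edit]],
               max_rounds: int = 4) -> List[str]:
    """Iterative refinement: re-run the parallel labeler on its own output,
    stopping once every predicted edit is COPY (a fixed point)."""
    for _ in range(max_rounds):
        new_tokens = apply_edits(tokens, predictor(tokens))
        if new_tokens == tokens:
            break
        tokens = new_tokens
    return tokens

print(pie_decode("I beleive in future".split(), toy_predictor))
# -> ['I', 'believe', 'in', 'the', 'future']

In this sketch the first round fixes both errors in parallel, and the second round predicts COPY everywhere and terminates; because every position is labeled simultaneously, decoding cost per round is independent of output length, which is the speed advantage the abstract claims over autoregressive encoder-decoder models.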

References


(Awasthi et al., 2019) ⇒ Abhijeet Awasthi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla, and Sunita Sarawagi. (2019). "Parallel Iterative Edit Models for Local Sequence Transduction." In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019).