2020 GECToRGrammaticalErrorCorrectio
- (Omelianchuk et al., 2020) ⇒ Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem N. Chernodub, and Oleksandr Skurzhanskyi. (2020). “GECToR - Grammatical Error Correction: Tag, Not Rewrite.” In: Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL 2020).
Subject Headings: GECToR; Grammatical Error Correction System; GEC Sequence Tagging System; LaserTagger; BERT Encoder.
Notes
- Source code available at: https://github.com/grammarly/gector
Cited By
- Google Scholar: ~ 14 Citations.
Quotes
Abstract
In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful corpora, and second on a combination of errorful and error-free parallel corpora. We design custom token-level transformations to map input tokens to target corrections. Our best single-model/ensemble GEC tagger achieves an F0.5 of 65.3 / 66.5 on CONLL-2014 (test) and F0.5 of 72.4 / 73.6 on BEA-2019 (test). Its inference speed is up to 10 times as fast as a Transformer-based seq2seq GEC system. The code and [[trained model]]s are publicly available[1].
1. Introduction
Neural Machine Translation (NMT)-based approaches (Sennrich et al., 2016a) have become the preferred method for the task of Grammatical Error Correction (GEC)[2]. In this formulation, errorful sentences correspond to the source language, and error-free sentences correspond to the target language. Recently, Transformer-based (Vaswani et al., 2017) sequence-to-sequence (seq2seq) models have achieved state-of-the-art performance on standard GEC benchmarks (Bryant et al., 2019). Now the focus of research has shifted more towards generating synthetic data for pretraining Transformer-NMT-based GEC systems (Grundkiewicz et al., 2019; Kiyono et al., 2019). NMT-based GEC systems suffer from several issues which make them inconvenient for real-world deployment: (i) slow inference speed, (ii) demand for large amounts of training data, and (iii) lack of interpretability and explainability; they require additional functionality to explain corrections, e.g., grammatical error type classification (Bryant et al., 2017).
In this paper, we deal with the aforementioned issues by simplifying the task from sequence generation to sequence tagging. Our GEC sequence tagging system consists of three training stages: pretraining on synthetic data, fine-tuning on an errorful parallel corpus, and finally, fine-tuning on a combination of errorful and error-free parallel corpora.
Related Work
LaserTagger (Malmi et al., 2019) combines a BERT encoder with an autoregressive Transformer decoder to predict three main edit operations: keeping a token, deleting a token, and adding a phrase before a token. PIE (Awasthi et al., 2019) is an iterative sequence tagging GEC system that predicts token-level edit operations. While their approach is the most similar to ours, our work differs from theirs as described in our contributions below:
- 1. We develop custom g-transformations: token-level edits to perform (g)rammatical error corrections. Predicting g-transformations instead of regular tokens improves the generalization of our GEC sequence tagging system.
- 2. We decompose the fine-tuning stage into two stages: fine-tuning on errorful-only sentences and further fine-tuning on a small, high-quality dataset containing both errorful and error-free sentences.
- 3. We achieve superior performance by incorporating a pre-trained Transformer encoder in our GEC sequence tagging system. In our experiments, encoders from XLNet and RoBERTa outperform three other cutting-edge Transformer encoders (ALBERT, BERT, and GPT-2).
2. Datasets
Table 1 describes the finer details of datasets used for different training stages.
Dataset | # sentences | % errorful sentences | Training stage |
---|---|---|---|
PIE-synthetic | 9,000,000 | 100.0% | I |
Lang-8 | 947,344 | 52.5% | II |
NUCLE | 56,958 | 38.0% | II |
FCE | 34,490 | 62.4% | II |
W&I+LOCNESS | 34,304 | 67.3% | II, III |
Synthetic Data
For pretraining stage I, we use 9M parallel sentences with synthetically generated grammatical errors (Awasthi et al., 2019) [3].
Training Data
We use the following datasets for fine-tuning stages II and III: National University of Singapore Corpus of Learner English (NUCLE)[4] (Dahlmeier et al., 2013), Lang-8 Corpus of Learner English (Lang-8)[5] (Tajiri et al., 2012), FCE dataset[6] (Yannakoudakis et al., 2011), the publicly available part of the Cambridge Learner Corpus (Nicholls, 2003) and Write & Improve + LOCNESS Corpus (Bryant et al., 2019)[7].
Evaluation Data
We report results on the CoNLL-2014 test set (Ng et al., 2014) evaluated by the official M2 scorer (Dahlmeier and Ng, 2012), and on the BEA-2019 dev and test sets evaluated by ERRANT (Bryant et al., 2017).
3. Token-Level Transformations
We developed custom token-level transformations $T(x_i)$ to recover the target text by applying them to the source tokens $\left(x_1 \ldots x_N\right)$. Transformations increase the coverage of grammatical error corrections for a limited output vocabulary size for the most common grammatical errors, such as Spelling, Noun Number, Subject-Verb Agreement and Verb Form (Yuan, 2017, p. 28).
The edit space which corresponds to our default tag vocabulary size = 5000 consists of 4971 basic transformations (token-independent KEEP, DELETE and 1167 token-dependent APPEND, 3802 REPLACE) and 29 token-independent g-transformations.
Basic transformations perform the most common token-level edit operations, such as: keep the current token unchanged (tag $KEEP), delete the current token (tag $DELETE), append a new token $t_1$ next to the current token $x_i$ (tag $APPEND_{t_1}$), or replace the current token $x_i$ with another token $t_2$ (tag $REPLACE_{t_2}$).
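The sketch below is a minimal illustration (not the authors' released code) of how the four basic transformations could be applied to a tokenized sentence; the tag strings follow the $KEEP / $DELETE / $APPEND_t / $REPLACE_t convention above.

```python
# Illustrative sketch: apply one basic transformation tag per source token.
def apply_basic_transformations(tokens, tags):
    """Return the edited token list after applying tags to the source tokens."""
    output = []
    for token, tag in zip(tokens, tags):
        if tag == "$KEEP":
            output.append(token)
        elif tag == "$DELETE":
            continue  # drop the current token
        elif tag.startswith("$APPEND_"):
            output.append(token)
            output.append(tag[len("$APPEND_"):])   # insert a new token after the current one
        elif tag.startswith("$REPLACE_"):
            output.append(tag[len("$REPLACE_"):])  # substitute the current token
        else:
            output.append(token)                   # unknown tag: leave the token unchanged
    return output

print(apply_basic_transformations(
    ["he", "is", "waiting", "your", "reply"],
    ["$KEEP", "$KEEP", "$APPEND_for", "$KEEP", "$KEEP"]))
# ['he', 'is', 'waiting', 'for', 'your', 'reply']
```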
g-transformations perform task-specific operations such as: change the case of the current token (CASE tags), merge the current token and the next token into a single one (MERGE tags), and split the current token into two new tokens (SPLIT tags). Moreover, tags from NOUN_NUMBER and VERB_FORM transformations encode grammatical properties for tokens. For instance, these transformations include conversion of singular nouns to plurals and vice versa or even change the form of regular/irregular verbs to express a different number or tense.
To obtain the transformation suffix for the VERB_FORM tag, we use the verb conjugation dictionary[8]. For convenience, it was converted into the following format: $token_0\_token_1 : tag_0\_tag_1$ (e.g., $go\_goes : VB\_VBZ$). This means that there is a transition from $word_0$ and $word_1$ to the respective tags. The transition is unidirectional, so if there exists a reverse transition, it is presented separately.
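As a hedged illustration of this dictionary format, the sketch below builds a lookup from a (source word, tag transition) pair to the target word. The entries and helper names are assumptions for the example, not the repository's actual data structures.

```python
# Minimal sketch: parse entries in the "token0_token1:tag0_tag1" format
# described above (e.g. "go_goes:VB_VBZ") into a unidirectional lookup table.
def load_verb_form_dict(lines):
    """Map (source_word, tag_transition) -> target word, e.g. ('go', 'VB_VBZ') -> 'goes'."""
    table = {}
    for line in lines:
        words, tags = line.strip().split(":")   # "go_goes", "VB_VBZ"
        word_src, word_dst = words.split("_")
        table[(word_src, tags)] = word_dst      # reverse transitions are stored as separate entries
    return table

verb_forms = load_verb_form_dict(["go_goes:VB_VBZ", "goes_go:VBZ_VB"])
print(verb_forms[("go", "VB_VBZ")])  # 'goes' -> realizes the tag $VERB_FORM_VB_VBZ
```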
The experimental comparison of covering capabilities for our token-level transformations is in Table 2. All transformation types with examples are listed in Appendix, Table 9.
Tag vocab. size | Basic transf. | All transf. |
---|---|---|
100 | 60.4% | 79.7% |
1000 | 76.4% | 92.9% |
5000 | 89.5% | 98.1% |
10000 | 93.5% | 100.0% |
Preprocessing
To approach the task as a sequence tagging problem, we need to convert each target sentence from the training/evaluation sets into a sequence of tags where each tag is mapped to a single source token. Below is a brief description of our 3-step preprocessing algorithm for the color-coded sentence pair from Table 3:
Step 1). Map each token from the source sentence to a subsequence of tokens from the target sentence: [A ↦ A], [ten ↦ ten, -], [years ↦ year, -], [old ↦ old], [go ↦ goes, to], [school ↦ school, .].
For this purpose, we first detect the minimal spans of tokens which define differences between the source tokens $\left(x_1 \ldots x_N\right)$ and the target tokens $\left(y_1 \ldots y_M\right)$. Thus, such a span is a pair of selected source tokens and corresponding target tokens. We can't use these span-based alignments, because we need to get tags on the token level. So then, for each source token $x_i$, $1 \leq i \leq N$, we search for the best-fitting subsequence $\Upsilon_i = \left(y_{j_1} \ldots y_{j_2}\right)$, $1 \leq j_1 \leq j_2 \leq M$, of target tokens by minimizing the modified Levenshtein distance (which takes into account that a successful g-transformation is equal to zero distance).
Step 2). For each mapping in the list, find token-level transformations which convert the source token to the target subsequence: [A ↦ A]: $KEEP; [ten ↦ ten, -]: $KEEP, $MERGE_HYPHEN; [years ↦ year, -]: $NOUN_NUMBER_SINGULAR, $MERGE_HYPHEN; [old ↦ old]: $KEEP; [go ↦ goes, to]: $VERB_FORM_VB_VBZ, $APPEND_to; [school ↦ school, .]: $KEEP, $APPEND_{.}.
Step 3). Leave only one transformation for each source token: A ⇔ $KEEP, ten ⇔ $MERGE_HYPHEN, years ⇔ $NOUN_NUMBER_SINGULAR, old ⇔ $KEEP, go ⇔ $VERB_FORM_VB_VBZ, school ⇔ $APPEND_{.}.
The iterative sequence tagging approach adds a constraint because we can use only a single tag for each token. In case of multiple transformations, we take the first transformation that is not a $KEEP tag. For more details, please see the preprocessing script in our repository[9].
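The following minimal sketch illustrates only this tag-selection rule from Step 3 (take the first transformation that is not $KEEP); it is an illustration, not the repository's preprocessing script.

```python
# Illustrative sketch of Step 3: collapse the list of candidate transformations
# for a source token into a single tag, preferring the first non-$KEEP tag.
def choose_single_tag(candidate_tags):
    for tag in candidate_tags:
        if tag != "$KEEP":
            return tag
    return "$KEEP"

print(choose_single_tag(["$KEEP", "$MERGE_HYPHEN"]))                 # '$MERGE_HYPHEN'
print(choose_single_tag(["$NOUN_NUMBER_SINGULAR", "$MERGE_HYPHEN"])) # '$NOUN_NUMBER_SINGULAR'
print(choose_single_tag(["$KEEP"]))                                  # '$KEEP'
```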
Iteration # | Sentence’s evolution | # corr. |
---|---|---|
Orig. sent | A ten years old boy go school | - |
Iteration 1 | A ten-years old boy goes school | 2 |
Iteration 2 | A ten-year-old boy goes to school | 5 |
Iteration 3 | A ten-year-old boy goes to school. | 6 |
4. Tagging Model Architecture
Our GEC sequence tagging model is an encoder from a pretrained BERT-like transformer stacked with two linear layers with softmax layers on top. We always use cased pretrained transformers in their Base configurations. Tokenization depends on the particular transformer's design: BPE (Sennrich et al., 2016b) is used in RoBERTa, WordPiece (Schuster and Nakajima, 2012) in BERT, and SentencePiece (Kudo and Richardson, 2018) in XLNet. To process information at the token level, we take the first subword per token from the encoder's representation, which is then forwarded to the subsequent linear layers, which are responsible for error detection and error tagging, respectively.
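A minimal sketch of this architecture is shown below. It is not the released GECToR implementation; the encoder name ("roberta-base"), the head sizes, and the naive first-subword indexing in the usage example are illustrative assumptions.

```python
# Sketch: pretrained Transformer encoder + two linear heads (error detection, tagging).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class GecTagger(nn.Module):
    def __init__(self, encoder_name="roberta-base", num_tags=5000, num_detect=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.tag_head = nn.Linear(hidden, num_tags)       # which transformation to apply
        self.detect_head = nn.Linear(hidden, num_detect)  # is this token erroneous?

    def forward(self, input_ids, attention_mask, first_subword_index):
        # Encode subwords, then keep only the first subword of each original token.
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        token_repr = hidden.gather(
            1, first_subword_index.unsqueeze(-1).expand(-1, -1, hidden.size(-1)))
        return self.tag_head(token_repr).softmax(-1), self.detect_head(token_repr).softmax(-1)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = GecTagger()
enc = tokenizer("A ten years old boy go school", return_tensors="pt")
# For brevity we pretend every subword starts a token; a real pipeline would use word offsets.
first_idx = torch.arange(enc["input_ids"].size(1)).unsqueeze(0)
tag_probs, detect_probs = model(enc["input_ids"], enc["attention_mask"], first_idx)
print(tag_probs.shape, detect_probs.shape)
```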
5. Iterative Sequence Tagging Approach
To correct the text, for each input token $x_i$, $1 \leq i \leq N$, from the source sequence $\left(x_1 \ldots x_N\right)$, we predict the tag-encoded token-level transformation $T\left(x_i\right)$ described in Section 3. These predicted tag-encoded transformations are then applied to the sentence to get the modified sentence.
Since some corrections in a sentence may depend on others, applying the GEC sequence tagger only once may not be enough to fully correct the sentence. Therefore, we use the iterative correction approach from (Awasthi et al., 2019): we use the GEC sequence tagger to tag the now-modified sequence and apply the corresponding transformations on the new tags, which changes the sentence further (see an example in Table 3). Usually, the number of corrections decreases with each successive iteration, and most of the corrections are done during the first two iterations (Table 4). Limiting the number of iterations speeds up the overall pipeline while trading off qualitative performance.
Iteration# | P | R | F0.5 | # corr. |
---|---|---|---|---|
Iteration 1 | 72.3 | 38.6 | 61.5 | 787 |
Iteration 2 | 73.7 | 41.1 | 63.6 | 934 |
Iteration 3 | 74.0 | 41.5 | 64.0 | 956 |
Iteration 4 | 73.9 | 41.5 | 64.0 | 958 |
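A minimal sketch of the iterative correction loop described in this section is shown below; predict_tags and apply_transformations are hypothetical stand-ins for the tagger and the edit-application step, not the released API.

```python
# Sketch: re-tag and re-edit the sentence until no corrections remain or the
# iteration cap is reached, mirroring the iterative approach described above.
def correct_iteratively(tokens, predict_tags, apply_transformations, max_iterations=5):
    for _ in range(max_iterations):
        tags = predict_tags(tokens)                    # one tag per current token
        if all(tag == "$KEEP" for tag in tags):
            break                                      # nothing left to correct
        tokens = apply_transformations(tokens, tags)   # e.g. the basic-transformation sketch above
    return tokens
```

Capping max_iterations trades a small amount of F0.5 for proportionally faster inference, as Table 4 and the speed comparison in Section 6 indicate.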
6. Experiments
Training Stages
We have 3 training stages (details of data usage are in Table 1):
- I. Pre-training on synthetic errorful sentences as in (Awasthi et al., 2019).
- II. Fine-tuning on errorful-only sentences.
- III. Fine-tuning on a subset of errorful and error-free sentences as in (Kiyono et al., 2019).
We found that having two fine-tuning stages with and without error-free sentences is crucial for performance (Table 5).
Training stage # | CoNLL-2014 (test) | BEA-2019 (dev) | ||||
---|---|---|---|---|---|---|
P | R | F0.5 | P | R | F0.5 | |
Stage I. | 55.4 | 35.9 | 49.9 | 37.0 | 23.6 | 33.2 |
Stage II. | 64.4 | 46.3 | 59.7 | 46.4 | 37.9 | 44.4 |
Stage III. | 66.7 | 49.9 | 62.5 | 52.6 | 43.0 | 50.3 |
Inf. tweaks | 77.5 | 40.2 | 65.3 | 66.0 | 33.8 | 55.5 |
All our models were trained with the Adam optimizer (Kingma and Ba, 2015) with default hyperparameters. Early stopping was used; the stopping criterion was 3 epochs of 10K updates each without improvement. We set batch size = 256 for the pre-training stage (20 epochs) and batch size = 128 for fine-tuning stages II and III (2-3 epochs each). We also observed that freezing the encoder's weights for the first 2 epochs on training stages I-II and using a batch size greater than 64 improves convergence and leads to better GEC performance.
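A minimal sketch of the encoder-freezing trick is given below; it assumes a model object with an encoder attribute (as in the tagger sketch in Section 4) and omits the training loop itself.

```python
# Sketch: keep the pretrained encoder's weights fixed for the first two epochs
# so that only the two linear heads are updated, then unfreeze everything.
import torch.nn as nn

def set_encoder_trainable(model: nn.Module, trainable: bool) -> None:
    """Toggle gradient updates for the pretrained encoder's parameters."""
    for param in model.encoder.parameters():
        param.requires_grad = trainable

# Usage inside a training loop (epochs are 0-indexed):
#   set_encoder_trainable(model, trainable=(epoch >= 2))
```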
Encoders from Pretrained Transformers
We fine-tuned BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2019) with the same hyperparameter setup. We also added an LSTM with randomly initialized embeddings (dim = 300) as a baseline. As follows from Table 6, encoders from fine-tuned Transformers significantly outperform LSTMs. BERT, RoBERTa and XLNet encoders perform better than GPT-2 and ALBERT, so we used only these three in our next experiments. All models were trained out-of-the-box[10], which seems not to work well for GPT-2. We hypothesize that encoders from Transformers which were pretrained as a part of an entire encoder-decoder pipeline are less useful for GECToR.
Encoder | CoNLL-2014 (test) | BEA-2019 (dev) | ||||
---|---|---|---|---|---|---|
P | R | F0.5 | P | R | F0.5 | |
LSTM | 51.6 | 15.3 | 35.0 | - | - | - |
ALBERT | 59.5 | 31.0 | 50.3 | 43.8 | 22.3 | 36.7 |
BERT | 65.6 | 36.9 | 56.8 | 48.3 | 29.0 | 42.6 |
GPT-2 | 61.0 | 6.3 | 22.2 | 44.5 | 5.0 | 17.2 |
RoBERTa | 67.5 | 38.3 | 58.6 | 50.3 | 30.5 | 44.5 |
XLNet | 64.6 | 42.6 | 58.5 | 47.1 | 34.2 | 43.8 |
Tweaking the Inference
We forced the model to perform more precise corrections by introducing two inference hyperparameters (see Appendix, Table 11); their values were found by random search on BEA-dev.
First, we added a permanent positive confidence bias to the probability of the $KEEP tag, which is responsible for not changing the source token. Second, we added a sentence-level minimum error probability threshold for the output of the error detection layer. This increased precision by trading off recall and achieved better F0.5 scores (Table 5).
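A minimal sketch of these two tweaks is given below; the tensor layout, function name, and default values are illustrative assumptions rather than the released inference code.

```python
# Sketch: bias the $KEEP probability and skip sentences whose maximum
# error-detection probability falls below a sentence-level threshold.
import torch

def apply_inference_tweaks(tag_probs: torch.Tensor,
                           error_probs: torch.Tensor,
                           keep_index: int,
                           confidence_bias: float = 0.2,
                           min_error_prob: float = 0.5):
    """tag_probs: (num_tokens, num_tags); error_probs: (num_tokens,) from the detection head."""
    if error_probs.max().item() < min_error_prob:
        return None                               # sentence-level threshold: leave the text untouched
    biased = tag_probs.clone()
    biased[:, keep_index] += confidence_bias      # permanent positive bias toward the $KEEP tag
    return biased.argmax(dim=-1)                  # chosen tag index for every token
```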
Finally, our best single model, GECToR (XLNet), achieves F0.5 = 65.3 on CoNLL-2014 (test) and F0.5 = 72.4 on BEA-2019 (test). Our best ensemble, GECToR (BERT + RoBERTa + XLNet), where we simply average the output probabilities from the three single models, achieves F0.5 = 66.5 on CoNLL-2014 (test) and F0.5 = 73.6 on BEA-2019 (test), respectively (Table 7).
GEC system | Ens. | CoNLL-2014 (test) | BEA-2019 (test) | ||||
---|---|---|---|---|---|---|---|
P | R | F0.5 | P | R | F0.5 | ||
Zhao et al. (2019) | 67.7 | 40.6 | 59.8 | − | − | − | |
Awasthi et al. (2019) | 66.1 | 43.0 | 59.7 | − | − | − | |
Kiyono et al. (2019) | 67.9 | 44.1 | 61.3 | 65.5 | 59.4 | 64.2 | |
Zhao et al. (2019) | ✓ | 74.1 | 36.3 | 61.3 | − | − | − |
Awasthi et al. (2019) | ✓ | 68.3 | 43.2 | 61.2 | − | − | − |
Kiyono et al. (2019) | ✓ | 72.4 | 46.1 | 65.0 | 74.7 | 56.7 | 70.2 |
Kantor et al. (2019) | ✓ | − | − | − | 78.3 | 58.0 | 73.2 |
GECToR (BERT) | 72.1 | 42.0 | 63.0 | 71.5 | 55.7 | 67.6 | |
GECToR (RoBERTa) | 73.9 | 41.5 | 64.0 | 77.2 | 55.1 | 71.5 | |
GECToR (XLNet) | 77.5 | 40.1 | 65.3 | 79.2 | 53.9 | 72.4 | |
GECToR(RoBERTa+ XLNet) | ✓ | 76.6 | 42.3 | 66.0 | 79.4 | 57.2 | 73.7 |
GECToR(BERT+RoBERTa+XLNet) | ✓ | 78.2 | 41.5 | 66.5 | 78.9 | 58.2 | 73.6 |
Speed Comparison
We measured the model's average inference time on an NVIDIA Tesla V100 with batch size 128. For sequence tagging we don't need to predict corrections one-by-one as in autoregressive transformer decoders, so inference is naturally parallelizable and therefore runs many times faster. Our sequence tagger's inference speed is up to 10 times as fast as the state-of-the-art Transformer from Zhao et al. (2019), beam size = 12 (Table 8).
GEC system | Time (sec) |
---|---|
Transformer-NMT, beam size = 12 | 4.35 |
Transformer-NMT, beam size = 4 | 1.25 |
Transformer-NMT, beam size = 1 | 0.71 |
GECToR (XLNet), 5 iterations | 0.40 |
GECToR (XLNet), 1 iteration | 0.20 |
7. Conclusions
We show that a faster, simpler, and more efficient GEC system can be developed using a sequence tagging approach, an encoder from a pretrained Transformer, custom transformations and 3-stage training.
Our best single-model/ensemble GEC tagger achieves an F0.5 of 65.3/66.5 on CoNLL-2014 (test) and F0.5 of 72.4/73.6 on BEA-2019 (test). We achieve state-of-the-art results for the GEC task with an inference speed up to 10 times as fast as Transformer-based seq2seq systems.
8. Acknowledgements
This research was supported by Grammarly. We thank our colleagues Vipul Raheja, Oleksiy Syvokon, Andrey Gryshchuk and our ex-colleague Maria Nadejde who provided insight and expertise that greatly helped to make this paper better. We would also like to show our gratitude to Abhijeet Awasthi and Roman Grundkiewicz for their support in providing data and answering related questions. We also thank 3 anonymous reviewers for their contribution.
Footnotes
- ↑ https://github.com/grammarly/gector
- ↑ http://nlpprogress.com/english/grammatical_error_correction.html
- ↑ https://github.com/awasthiabhijeet/PIE/tree/master/errorify
- ↑ https://www.comp.nus.edu.sg/~nlp/corpora.html
- ↑ https://sites.google.com/site/naistlang8corpora
- ↑ https://ilexir.co.uk/datasets/index.html
- ↑ https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz
- ↑ https://github.com/gutfeeling/word_forms/blob/master/word_forms/en-verbs.txt
- ↑ https://github.com/grammarly/gector
- ↑ https://huggingface.co/transformers/
A. Appendix
id | Core transformation | Transformation suffix | Tag | Example |
---|---|---|---|---|
basic-1 | KEEP | ∅ | $KEEP | . . . many people want to travel during the summer . . . |
basic-2 | DELETE | ∅ | $DELETE | . . . not sure if you are {you ⇒ ∅} gifting . . . |
basic-3 | REPLACE | a | $REPLACE_a | . . . the bride wears {the ⇒ a} white dress . . . |
. . . | . . . | . . . | . . . | . . . |
basic-3804 | REPLACE | cause | $REPLACE_cause | . . . hope it does not {make ⇒ cause} any trouble . . . |
basic-3805 | APPEND | for | $APPEND_for | . . . he is {waiting ⇒ waiting for} your reply . . . |
. . . | . . . | . . . | . . . | . . . |
basic-4971 | APPEND | know | $APPEND_know | . . . I {don’t ⇒ don’t know} which to choose. . . |
g-1 | CASE | CAPITAL | $CASE_CAPITAL | . . . surveillance is on the {internet ⇒ Internet} . . . |
g-2 | CASE | CAPITAL_1 | $CASE_CAPITAL_1 | . . . I want to buy an {iphone ⇒ iPhone} . . . |
g-3 | CASE | LOWER | $CASE_LOWER | . . . advancement in {Medical ⇒ medical} technology . . . |
g-4 | CASE | UPPER | $CASE_UPPER | . . . the {it ⇒ IT} department is concerned that. . . |
g-5 | MERGE | SPACE | $MERGE_SPACE | . . . insert a special kind of gene {in to ⇒ into} the cell . . . |
g-6 | MERGE | HYPHEN | $MERGE_HYPHEN | . . . and needs {in depth ⇒ in-depth} search . . . |
g-7 | SPLIT | HYPHEN | $SPLIT_HYPHEN | . . . support us for a {long-run ⇒ long run} . . . |
g-8 | NOUN_NUMBER | SINGULAR | $NOUN_NUMBER_SINGULAR | . . . a place to live for their {citizen ⇒ citizens} |
g-9 | NOUN_NUMBER | PLURAL | $NOUN_NUMBER_PLURAL | . . . carrier of this {diseases ⇒ disease} . . . |
g-10 | VERB_FORM | VB_VBZ | $VERB_FORM_VB_VBZ | . . . going through this {make ⇒ makes} me feel . . . |
g-11 | VERB_FORM | VB_VBN | $VERB_FORM_VB_VBN | . . . to discuss what {happen ⇒ happened} in fall . . . |
g-12 | VERB_FORM | VB_VBD | $VERB_FORM_VB_VBD | . . . she sighed and {draw ⇒ drew} her . . . |
g-13 | VERB_FORM | VB_VBG | $VERB_FORM_VB_VBG | . . . shown success in {prevent ⇒ preventing} such . . . |
g-14 | VERB_FORM | VBZ_VB | $VERB_FORM_VBZ_VB | . . . a small percentage of people {goes ⇒ go} by bike . . . |
g-15 | VERB_FORM | VBZ_VBN | $VERB_FORM_VBZ_VBN | . . . development has {pushes ⇒ pushed} countries to . . . |
g-16 | VERB_FORM | VBZ_VBD | $VERB_FORM_VBZ_VBD | . . . he {drinks ⇒ drank} a lot of beer last night . . . |
g-17 | VERB_FORM | VBZ_VBG | $VERB_FORM_VBZ_VBG | . . . couldn’t stop {thinks ⇒ thinking} about it . . . |
g-18 | VERB_FORM | VBN_VB | $VERB_FORM_VBN_VB | . . . going to {depended ⇒ depend} on who is hiring . . . |
g-19 | VERB_FORM | VBN_VBZ | $VERB_FORM_VBN_VBZ | . . . yet he goes and {eaten ⇒ eats} more melons . . . |
g-20 | VERB_FORM | VBN_VBD | $VERB_FORM_VBN_VBD | . . . he {driven ⇒ drove} to the bus stop and . . . |
g-21 | VERB_FORM | VBN_VBG | $VERB_FORM_VBN_VBG | . . . don’t want you fainting and {broken ⇒ breaking} . . . |
g-22 | VERB_FORM | VBD_VB | $VERB_FORM_VBD_VB | . . . each of these items will {fell ⇒ fall} in price . . . |
g-23 | VERB_FORM | VBD_VBZ | $VERB_FORM_VBD_VBZ | . . . the lake {froze ⇒ freezes} every year . . . |
g-24 | VERB_FORM | VBD_VBN | $VERB_FORM_VBD_VBN | . . . he has been {went ⇒ gone} since last week . . . |
g-25 | VERB_FORM | VBD_VBG | $VERB_FORM_VBD_VBG | . . . talked her into {gave ⇒ giving} me the whole day . . . |
g-26 | VERB_FORM | VBG_VB | $VERB_FORM_VBG_VB | . . . free time, I just {enjoying ⇒ enjoy} being outdoors . . . |
g-27 | VERB_FORM | VBG_VBZ | $VERB_FORM_VBG_VBZ | . . . there still {existing ⇒ exists} many inevitable factors . . . |
g-28 | VERB_FORM | VBG_VBN | $VERB_FORM_VBG_VBN | . . . people are afraid of being {tracking ⇒ tracked} . . . |
g-29 | VERB_FORM | VBG_VBD | $VERB_FORM_VBG_VBD | . . . there was no {mistook ⇒ mistaking} his sincerity . . . |
Training stage # | CoNLL-2014 (test) | BEA-2019 (dev) | ||||
---|---|---|---|---|---|---|
P | R | F0.5 | P | R | F0.5 | |
Stage I. | 57.8 | 33.0 | 50.2 | 40.8 | 22.1 | 34.9 |
Stage II. | 68.1 | 42.6 | 60.8 | 51.6 | 33.8 | 46.7 |
Stage III. | 68.8 | 47.1 | 63.0 | 54.2 | 41.0 | 50.9 |
Inf. tweaks | 73.9 | 41.5 | 64.0 | 62.3 | 35.1 | 54.0 |
System name | Confidence bias | Minimum error probability |
---|---|---|
GECToR (BERT) | 0.10 | 0.41 |
GECToR (RoBERTa) | 0.20 | 0.50 |
GECToR (XLNet) | 0.35 | 0.66 |
GECToR (RoBERTa + XLNet) | 0.24 | 0.45 |
GECToR (BERT + RoBERTa + XLNet) | 0.16 | 0.40 |
References
2019a
- (Awasthi et al., 2019) ⇒ Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. (2019). “Parallel Iterative Edit Models for Local Sequence Transduction.” In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019).
2019b
- (Bryant et al., 2019) ⇒ Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe (2019). "The BEA-2019 Shared Task On Grammatical Error Correction". In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics.
2019c
- (Devlin et al., 2019) ⇒ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. (2019). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Volume 1 (Long and Short Papers). DOI:10.18653/v1/N19-1423. arXiv:1810.04805
2019d
- (Grundkiewicz et al., 2019) ⇒ Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield (2019). “Neural Grammatical Error Correction Systems With Unsupervised Pre-Training on Synthetic Data". In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252–263.
2019e
- (Kantor et al., 2019) ⇒ Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen-Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, and Noam Slonim (2019). "Learning to Combine Grammatical Error Corrections". In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 139–148, Florence, Italy. Association for Computational Linguistics.
2019f
- (Kiyono et al., 2019) ⇒ Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui (2019). "An Empirical Study Of Incorporating Pseudo Data Into Grammatical Error Correction". In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236–1242, Hong Kong, China. Association for Computational Linguistics.
2019g
- (Lan et al., 2019) ⇒ Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut (2019). "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations". arXiv preprint arXiv:1909.11942.
2019h
- (Liu et al., 2019) ⇒ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. (2019). “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” In: CoRR, abs/1907.11692.
2019i
- (Malmi et al., 2019) ⇒ Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. (2019). “Encode, Tag, Realize: High-Precision Text Editing.” In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019).
2019j
- (Radford et al., 2019) ⇒ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. (2019). “Language Models Are Unsupervised Multitask Learners.” In: OpenAI Blog Journal, 1(8).
2019k
- (Yang et al., 2019) ⇒ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le (2019). "XLNet: Generalized Autoregressive Pretraining for Language Understanding". In: Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 5754–5764.
2019l
- (Zhao et al., 2019) ⇒ Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu (2019). "Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data". In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics.
2018
- (Kudo & Richardson, 2018) ⇒ Taku Kudo, and John Richardson. (2018). “SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing". In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018) System Demonstrations. DOI:10.18653/v1/d18-2012.
2017a
- (Bryant et al.,2017) ⇒ Christopher Bryant, Mariano Felice, and Ted Briscoe (2017). "Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction". In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793–805, Vancouver, Canada. Association for Computational Linguistics.
2017b
- (Vaswani et al., 2017) ⇒ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. (2017). “Attention is all You Need.” In: Advances in Neural Information Processing Systems.
2017c
- (Yuan, 2017) ⇒ Zheng Yuan (2017). “Grammatical Error Correction in Non-Native English". Technical report, University of Cambridge, Computer Laboratory.
2016a
- (Sennrich et al., 2016a) ⇒ Rico Sennrich, Barry Haddow, and Alexandra Birch (2016a). "Edinburgh Neural Machine Translation Systems for WMT 16". In: Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371–376, Berlin, Germany. Association for Computational Linguistics.
2016b
- (Sennrich et al., 2016b) ⇒ Rico Sennrich, Barry Haddow, and Alexandra Birch. (2016b). “Neural Machine Translation of Rare Words with Subword Units.” In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-2016).
2015
- (Kingma & Ba, 2015) ⇒ Diederik P. Kingma, and Jimmy Ba. (2015). “Adam: A Method for Stochastic Optimization.” In: Proceedings of the 3rd International Conference for Learning Representations (ICLR-2015).
2014
- (Ng et al., 2014) ⇒ Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. (2014). “The CoNLL-2014 Shared Task on Grammatical Error Correction.” In: Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task.
2013
- (Dahlmeier et al., 2013) ⇒ Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu (2013). "Building a Large Annotated Corpus of Learner English: The NUS Corpus Of Learner English". In: Proceedings of the eighth workshop on innovative use of NLP for building educational applications, pages 22–31.
2012a
- (Dahlmeier & Ng, 2012) ⇒ Daniel Dahlmeier and Hwee Tou Ng (2012). "Better Evaluation for Grammatical Error Correction". In: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568–572. Association for Computational Linguistics.
2012b
- (Schuster & Nakajima, 2012) ⇒ Mike Schuster and Kaisuke Nakajima (2012). "Japanese and Korean Voice Search". In: 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149–5152. IEEE.
2012c
- (Tajiri et al., 2012) ⇒ Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto (2012). [https://www.aclweb.org/anthology/P12-2039/ "Tense and Aspect Error Correction for ESL Learners Using Global Context"]. In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pages 198–202. Association for Computational Linguistics.
2011
- (Yannakoudakis et al., 2011) ⇒ Helen Yannakoudakis, Ted Briscoe, and Ben Medlock (2011). "A New Dataset and Method for Automatically Grading ESOL Texts". In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 180–189. Association for Computational Linguistics.
2003
- (Nicholls, 2003) ⇒ Diane Nicholls (2003). “The Cambridge Learner Corpus: Error Coding And Analysis For Lexicography and ELT". In: Proceedings of the Corpus Linguistics 2003 Conference, volume 16, pages 572–581.
BibTeX
@inproceedings{2020_GECToRGrammaticalErrorCorrectio, author = {Kostiantyn Omelianchuk and Vitaliy Atrasevych and Artem N. Chernodub and Oleksandr Skurzhanskyi}, editor = {Jill Burstein and Ekaterina Kochmar and Claudia Leacock and Nitin Madnani and Ildiko Pilan and Helen Yannakoudakis and Torsten Zesch}, title = {GECToR - Grammatical Error Correction: Tag, Not Rewrite}, booktitle = {Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL 2020)}, pages = {163--170}, publisher = {Association for Computational Linguistics}, year = {2020}, url = {https://doi.org/10.18653/v1/2020.bea-1.16}, doi = {10.18653/v1/2020.bea-1.16}, }