2020 BytePairEncodingisSuboptimalforLanguageModelPretraining


Subject Headings: Byte-Pair Encoding (BPE) Algorithm; Subword Tokenization Algorithm; Word Segmentation Algorithm.

Notes

Cited By

Quotes

Abstract

The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.
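The kind of comparison described in the abstract can be reproduced in spirit (this is not the authors' exact setup) with the SentencePiece library, which implements both BPE and unigram LM tokenization. In the sketch below, the corpus path, model prefixes, vocabulary size, and example sentence are placeholders.

import sentencepiece as spm

# Train one BPE and one unigram-LM tokenizer on the same corpus
# (corpus.txt and the vocabulary size are placeholder choices).
for model_type in ("bpe", "unigram"):
    spm.SentencePieceTrainer.train(
        input="corpus.txt",
        model_prefix=f"tok_{model_type}",
        model_type=model_type,
        vocab_size=32000,
    )

bpe = spm.SentencePieceProcessor(model_file="tok_bpe.model")
uni = spm.SentencePieceProcessor(model_file="tok_unigram.model")

text = "unrelated hospitalization"
print("BPE:    ", bpe.encode(text, out_type=str))
print("Unigram:", uni.encode(text, out_type=str))

Printing the two segmentations side by side is a quick way to inspect how closely each method's subword units align with morpheme boundaries.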

1. Introduction

2. Algorithms

(...)

A BPE vocabulary is constructed as follows:

Algorithm 1 Byte-pair encoding (Sennrich et al., 2016; Gage, 1994)
1:  Input: set of strings $D$, target vocab size $k$
2:  procedure BPE$\left(D, k\right)$
3:    $V \leftarrow$ all unique characters in $D$ (about 4,000 in English Wikipedia)
4:    while $\vert V \vert < k$ do ⊳ Merge tokens
5:      $t_L, t_R \leftarrow$ Most frequent bigram in $D$
6:      $t_{NEW} \leftarrow t_L + t_R$ ⊳ Make new token
7:      $V \leftarrow V + [t_{NEW}]$
8:      Replace each occurrence of $t_L$, $t_R$ in $D$ with $t_{NEW}$
9:    end while
10:   return $V$
11: end procedure
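The procedure can also be written out as a short Python sketch. This is an illustrative rendering of Algorithm 1 as quoted above, not the authors' code; the function name bpe_vocab and the toy corpus in the usage note are made up for the example. Each word is pre-split into characters, and the loop repeatedly merges the most frequent adjacent token pair until the vocabulary reaches the target size.

from collections import Counter

def bpe_vocab(corpus_words, target_size):
    """Sketch of Algorithm 1: grow a BPE vocabulary by greedy pair merges."""
    # Each word is a tuple of its current tokens, weighted by corpus frequency.
    words = Counter(tuple(w) for w in corpus_words)
    vocab = set(ch for w in words for ch in w)   # line 3: all unique characters
    merges = []                                  # ordered merge list for later tokenization
    while len(vocab) < target_size:              # line 4
        # Count adjacent token pairs, weighted by word frequency.
        pairs = Counter()
        for w, freq in words.items():
            for left, right in zip(w, w[1:]):
                pairs[(left, right)] += freq
        if not pairs:
            break
        (t_l, t_r), _ = pairs.most_common(1)[0]  # line 5: most frequent bigram
        t_new = t_l + t_r                        # line 6: make new token
        vocab.add(t_new)                         # line 7
        merges.append((t_l, t_r))
        # Line 8: replace each occurrence of the pair with the merged token.
        new_words = Counter()
        for w, freq in words.items():
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == (t_l, t_r):
                    out.append(t_new)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return vocab, merges

On a toy corpus such as ["low", "lower", "lowest", "low"], the first merge is typically ('l', 'o'), and the function returns both the vocabulary and the ordered merge list, which is what tokenization replays.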

 BPE tokenization takes the vocabulary $V$ containing ordered merges and applies them to new text in the same order as they occurred during vocabulary construction.
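A matching sketch of that tokenization step, under the same assumptions as the construction sketch above: the merge list returned by bpe_vocab is replayed over a new word in the order the merges were learned (apply_bpe is an illustrative name, not a library function).

def apply_bpe(word, merges):
    """Tokenize one word by replaying the ordered merge list from bpe_vocab."""
    tokens = list(word)                          # start from characters
    for t_l, t_r in merges:                      # same order as during construction
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == t_l and tokens[i + 1] == t_r:
                out.append(t_l + t_r)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

For instance, apply_bpe("lowest", merges) re-segments the word using only the merges learned during vocabulary construction.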

(...)

3. Comparison of Segmentations

4. Downstream Task Experiments

5. Conclusion

Acknowledgments

References

2018

(Kudo, 2018) ⇒ Taku Kudo. (2018). "Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates." In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018).

2016

(Sennrich et al., 2016) ⇒ Rico Sennrich, Barry Haddow, and Alexandra Birch. (2016). "Neural Machine Translation of Rare Words with Subword Units." In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).

2012

(Schuster & Nakajima, 2012) ⇒ Mike Schuster, and Kaisuke Nakajima. (2012). "Japanese and Korean Voice Search." In: Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012).

1994

(Gage, 1994) ⇒ Philip Gage. (1994). "A New Algorithm for Data Compression." In: The C Users Journal.

BibTeX

@inproceedings{2020_BytePairEncodingisSuboptimalfor,
  author    = {Kaj Bostrom and
               Greg Durrett},
  editor    = {Trevor Cohn and
               Yulan He and
               Yang Liu},
  title     = {Byte Pair Encoding is Suboptimal for Language Model Pretraining},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
  pages     = {4617--4624},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://doi.org/10.18653/v1/2020.findings-emnlp.414},
  doi       = {10.18653/v1/2020.findings-emnlp.414},
}


Author: Kaj Bostrom, Greg Durrett
Title: Byte Pair Encoding is Suboptimal for Language Model Pretraining
Year: 2020