2020 BytePairEncodingisSuboptimalfor
- (Bostrom & Durrett, 2020) ⇒ Kaj Bostrom, and Greg Durrett. (2020). “Byte Pair Encoding is Suboptimal for Language Model Pretraining.” In: Findings of the Association for Computational Linguistics: EMNLP 2020.
Subject Headings: Byte-Pair Encoding (BPE) Algorithm; Subword Tokenization Algorithm; Word Segmentation Algorithm.
Notes
Cited By
- Google Scholar: ~17 Citations, Retrieved: 2021-05-02.
Quotes
Abstract
The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.
1. Introduction
2. Algorithms
(...)
A BPE vocabulary is constructed as follows:
Algorithm 1: Byte-pair encoding (Sennrich et al., 2016; Gage, 1994)

    Input: set of strings $D$, target vocab size $k$
    procedure BPE($D, k$)
        $V \leftarrow$ all unique characters in $D$ (about 4,000 in English Wikipedia)
        while $\vert V \vert < k$ do    ⊳ Merge tokens
            $t_L, t_R \leftarrow$ most frequent bigram in $D$
            $t_{NEW} \leftarrow t_L + t_R$    ⊳ Make new token
            $V \leftarrow V + [t_{NEW}]$
            Replace each occurrence of $t_L, t_R$ in $D$ with $t_{NEW}$
        end while
        return $V$
    end procedure
BPE tokenization takes the vocabulary $V$ containing ordered merges and applies them to new text in the same order as they occurred during vocabulary construction.
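To make the two steps concrete, here is a minimal Python sketch of the greedy vocabulary construction in Algorithm 1 and of replaying the learned merges on new text. The names (`bpe_vocab`, `apply_merges`, `merge_pair`) and the toy corpus are illustrative, not from the paper or any particular library; production implementations work over word-frequency tables for efficiency, but the control flow is the same.

```python
from collections import Counter

def merge_pair(tokens, left, right, new_tok):
    """Replace each adjacent occurrence of (left, right) with new_tok."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == left and tokens[i + 1] == right:
            out.append(new_tok)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def bpe_vocab(words, k):
    """Greedy BPE vocabulary construction (sketch of Algorithm 1).

    `words` is a list of strings; each is tokenized into characters first.
    Returns the vocabulary and the ordered list of merge rules.
    """
    corpus = [list(w) for w in words]
    vocab = {c for w in corpus for c in w}            # all unique characters
    merges = []                                       # ordered merge rules
    while len(vocab) < k:
        bigrams = Counter()                           # count adjacent token pairs
        for w in corpus:
            bigrams.update(zip(w, w[1:]))
        if not bigrams:
            break
        (left, right), _ = bigrams.most_common(1)[0]  # most frequent bigram
        new_tok = left + right                        # make new token
        vocab.add(new_tok)
        merges.append((left, right))
        # Replace each occurrence of the pair throughout the corpus
        corpus = [merge_pair(w, left, right, new_tok) for w in corpus]
    return vocab, merges

def apply_merges(word, merges):
    """Tokenize new text by replaying merges in the order they were learned."""
    tokens = list(word)
    for left, right in merges:
        tokens = merge_pair(tokens, left, right, left + right)
    return tokens

vocab, merges = bpe_vocab(["low", "lower", "lowest", "newer", "newest"], k=15)
print(apply_merges("lowest", merges))
```

Note that construction commits greedily to the most frequent pair at every step; this greedy procedure is the source of the segmentation problems the paper contrasts with unigram LM tokenization.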
(...)
3. Comparison of Segmentations
4. Downstream Task Experiments
5. Conclusion
Acknowledgments
References
2018
- (Kudo, 2018) ⇒ Taku Kudo. (2018). “Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Volume 1: Long Papers. DOI:10.18653/v1/P18-1007.
2016
- (Sennrich et al., 2016) ⇒ Rico Sennrich, Barry Haddow, and Alexandra Birch. (2016). “Neural Machine Translation of Rare Words with Subword Units.” In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).
2012
- (Schuster & Nakajima, 2012) ⇒ Mike Schuster, and Kaisuke Nakajima. (2012). “Japanese and Korean Voice Search.” In: Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012).
1994
- (Gage, 1994) ⇒ Philip Gage. (1994). “A New Algorithm for Data Compression.” In: C Users Journal, 12(2):23–38.
BibTeX
@inproceedings{2020_BytePairEncodingisSuboptimalfor,
  author    = {Kaj Bostrom and Greg Durrett},
  editor    = {Trevor Cohn and Yulan He and Yang Liu},
  title     = {Byte Pair Encoding is Suboptimal for Language Model Pretraining},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
  pages     = {4617--4624},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://doi.org/10.18653/v1/2020.findings-emnlp.414},
  doi       = {10.18653/v1/2020.findings-emnlp.414},
}
|  | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2020 BytePairEncodingisSuboptimalfor | Kaj Bostrom; Greg Durrett |  |  | Byte Pair Encoding is Suboptimal for Language Model Pretraining |  |  |  |  |  | 2020 |