Base Pretrained Language Model (LM)


A Base Pretrained Language Model (LM) is a Language Model that has been pre-trained on a large corpus of text data but has not been fine-tuned for any specific downstream task. Examples include BERT and RoBERTa checkpoints prior to any task-specific adaptation.
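The following is a minimal sketch, assuming the Hugging Face transformers library, of using a base pretrained checkpoint (here roberta-base, the model cited below) directly with its original masked-language-modeling head, without any fine-tuning; the prompt text and predicted token are illustrative only.

```python
# Minimal sketch: query a base pretrained LM (no fine-tuning) via its
# original masked-language-modeling objective.
# Assumes the Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# RoBERTa's mask token is "<mask>"; we ask the pretrained model to fill it in.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the highest-scoring vocabulary item.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # e.g. " Paris"
```

Note that such a checkpoint only provides the pretraining head (e.g., masked-token prediction); applying it to a specific downstream task such as classification typically requires adding a task head and fine-tuning.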



References

2019

  • (Liu, Ott et al., 2019) ⇒ Yinhan Liu, Myle Ott, Naman Goyal, et al. (2019). “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” In: arXiv preprint arXiv:1907.11692.
    • QUOTE: “RoBERTa, an example of a base pretrained language model, demonstrates improved performance on several benchmarks, showing the effectiveness of robust optimization in pretraining.”