Transformer-based Neural Language Model
A Transformer-based Neural Language Model is a neural language model whose underlying architecture is a transformer-based deep neural network.
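The following is a minimal sketch of such a model, assuming PyTorch; the TinyTransformerLM class, its layer sizes, and its vocabulary size are illustrative inventions, and positional encodings and training code are omitted for brevity.
```python
# Minimal sketch (assuming PyTorch) of a tiny decoder-only transformer LM:
# token embeddings -> stacked self-attention blocks under a causal mask ->
# a linear head over the vocabulary. All sizes are illustrative, not tuned.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)
        # NOTE: positional encodings are omitted here for brevity.

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        seq_len = token_ids.size(1)
        # Causal mask: each position may attend only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        hidden = self.blocks(self.embed(token_ids), mask=mask)
        return self.lm_head(hidden)  # logits: (batch, seq_len, vocab_size)

logits = TinyTransformerLM()(torch.randint(0, 10_000, (1, 16)))
```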
- Context:
- It can (typically) be produced by a Transformer-based Neural Language Modeling System (one that solves a Transformer-based Neural Language Modeling Task).
- It can range from being a Character-Level Transformer-based Neural Network-based LM to being a Word Token-Level Transformer-based Neural Network-based LM.
- It can range from being a Small Transformer-based Language Model to being a Large Transformer-based Language Model.
- It can range from being a Forward Transformer-based Neural Network-based Language Model to being a Backward Transformer-based Neural Network-based Language Model to being a Bi-Directional Transformer-based Neural Network-based Language Model.
- It can range from being an Encoder-Decoder Transformer-based Neural Language Model to being an Encoder-Only Transformer-based Neural Language Model to being a Decoder-Only Transformer-based Neural Language Model (contrasted in the sketch after this list).
- It can range from being a Custom Transformer-based LM to being a Pre-Trained Transformer-based LM.
- …
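To make the encoder-only versus decoder-only distinction concrete, here is a minimal sketch assuming the Hugging Face transformers library is installed; the checkpoints bert-base-uncased and gpt2 are illustrative choices, not the only options.
```python
# Contrast an encoder-only model (masked-token prediction over bidirectional
# context) with a decoder-only model (left-to-right autoregressive generation).
from transformers import pipeline

# Encoder-only (BERT-style): fill-in-the-blank scoring using context on both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Transformers are a [MASK] architecture.")[0]["token_str"])

# Decoder-only (GPT-style): forward-only context, generating one token at a time.
generate = pipeline("text-generation", model="gpt2")
print(generate("Transformers are", max_new_tokens=10)[0]["generated_text"])
```
Both checkpoints are pre-trained transformer-based LMs, which also illustrates the Pre-Trained end of the Custom-versus-Pre-Trained range above.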
- Example(s):
- a BERT LM.
- a GPT LM, such as GPT-4.
- a Turing-NLG LM.
- …
- Counter-Example(s):
- an ELMo LM, which is built on bidirectional LSTMs rather than transformers.
- See: Neural NLG.
References
2023
- (Greco & Tagarelli, 2023) ⇒ Candida M. Greco, and Andrea Tagarelli. (2023). “Bringing Order Into the Realm of Transformer-based Language Models for Artificial Intelligence and Law.” In: Artificial Intelligence and Law Journal. doi:10.48550/arXiv.2308.05502
- QUOTE: ... Transformer-based language models (TLMs) have widely been recognized to be a cutting-edge technology for the successful development of deep-learning-based solutions to problems and applications that require natural language processing and understanding. ...