BioGPT Language Model
A BioGPT Language Model is a biomedical-domain pre-trained language model specialized for biomedical natural language processing tasks.
- Context:
- It is built on the principles of the GPT (Generative Pre-trained Transformer) language model, adapted for the biomedical domain.
- BioGPT is trained on large-scale biomedical literature to understand and generate biomedical text.
- It is used for various natural language processing tasks specific to the biomedical field, such as biomedical term generation, text mining, and relationship extraction.
- It has demonstrated success in generating fluent descriptions for biomedical terms and in performing end-to-end relation extraction tasks (see the usage sketch after this list).
- …
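The following is a minimal, illustrative sketch of generating biomedical text with BioGPT through the Hugging Face transformers checkpoint microsoft/biogpt; the prompt and decoding parameters are assumptions chosen for demonstration and are not taken from the BioGPT paper or repository.
```python
# Minimal sketch (assumed setup): biomedical text generation with the
# Hugging Face "microsoft/biogpt" checkpoint. The prompt and decoding
# parameters below are illustrative, not taken from the BioGPT paper.
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Prompt the model with a biomedical term and generate a description.
inputs = tokenizer("COVID-19 is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,       # cap the length of the generated description
    num_beams=5,         # beam search for a more fluent continuation
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```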
- Example(s):
- BioGPT-Large Model, with 1.5B parameters.
- BioGPT-Small Model.
- …
- Counter-Example(s):
- BioBERT and PubMedBERT, which are built on a BERT model rather than a GPT model.
- Med-PaLM, Med-PaLM 2.
- Galactica LLM Model.
- See: Biomedical NLP, Biomedical Literature, End-to-End Relation Extraction, Biomedical Term.
References
2022
- https://github.com/microsoft/BioGPT
- QUOTE: This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. ...
... BioGPT-Large model with 1.5B parameters is coming, currently available on PubMedQA task with SOTA performance of 81% accuracy.
2022
- (Luo et al., 2022) ⇒ Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. (2022). “BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining.” In: Briefings in Bioinformatics, 23(6). doi:10.1093/bib/bbac409
- ABSTRACT: Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.