Galactica LLM Model
A Galactica LLM Model is a scientific-domain LLM that focuses on storing, combining, and reasoning about scientific knowledge.
- Context:
- It can (typically) be a Domain-Specific LLM.
- It can (typically) be a Transformer-based LLM.
- It can (typically) be a Fixed-Context LLM (with a 2,048-token context window).
- It can (typically) be an Open-Source LLM.
- It can (typically) be a Multilingual LLM.
- It can (typically) be a Decoder-only LLM.
- It can (typically) be a Current LLM (2022).
- It can (typically) be a Pretrained LLM (trained from scratch on a curated scientific corpus rather than finetuned from a general-purpose model).
- It can (typically) be a Scientific AI Application.
- It can (typically) be a Meta AI LLM (in collaboration with Papers with Code).
- ...
- Example(s):
- Galactica LLM (v2022), which outperforms other language models such as Chinchilla and PaLM on scientific benchmarks.
- ...
- Counter-Example(s):
- See: Large Language Model (LLM), Transformer Architecture, Scientific AI Applications
References
2022
- (Taylor et al., 2022) ⇒ Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. (2022). "Galactica: A Large Language Model for Science." arXiv preprint arXiv:2211.09085. [Link](https://arxiv.org/abs/2211.09085)
- NOTES:
- The paper introduces Galactica, a large language model designed specifically for scientific knowledge, trained on a high-quality, curated dataset of scientific texts, including over 48 million papers, textbooks, and other scientific resources.
- The paper demonstrates Galactica's enhanced performance over existing language models on a variety of scientific tasks, achieving a 68.2% accuracy on LaTeX equations compared to 49.0% for GPT-3, and surpassing other models on reasoning tasks such as mathematical MMLU and MATH.
- The paper highlights the specialized training corpus for Galactica, which is meticulously curated and includes a wide range of scientific documents and data formats, ensuring a high-quality training process that enhances model performance.
- The paper details the use of task-specific tokens in Galactica to handle different types of scientific knowledge, such as citations, step-by-step reasoning, SMILES formulas for chemicals, and protein sequences, allowing for more precise and context-aware text generation (see the prompting sketch after these notes).
- The paper presents state-of-the-art results achieved by Galactica on several downstream tasks, including PubMedQA and MedMCQA, with scores of 77.6% and 52.9%, respectively, demonstrating its effectiveness in scientific question answering.
- The paper describes knowledge probe benchmarks that test Galactica’s ability to recall and utilize scientific knowledge, showing significant improvements over general language models like GPT-3 and Chinchilla.
- The paper explains Galactica's multi-modal capabilities, which enable it to process scientific modalities such as chemical reactions and protein sequences, allowing it to perform tasks like IUPAC name prediction and drug discovery simulations effectively (a second sketch after these notes illustrates sequence-marker prompting).
- The paper discusses Galactica's ability to predict citations, outperforming both sparse and dense retrieval methods and showcasing its capacity to organize and reference scientific literature accurately.
- The paper emphasizes the advantages of using a curated training dataset over uncurated web crawls, leading to better performance and more reliable outputs in specialized scientific tasks.
- The paper explores potential future applications of Galactica, including its use as an interface for scientific knowledge, capable of synthesizing literature reviews, writing academic papers, and performing complex reasoning tasks autonomously.
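The task-specific tokens mentioned in the notes above can be tried against the publicly released Galactica checkpoints. The following is a minimal sketch, assuming the Hugging Face transformers library and the facebook/galactica-125m checkpoint; the prompt wording is illustrative, and the token strings (`<work>`, `[START_REF]`) follow the conventions described in the paper.

```python
# Minimal sketch: prompting a public Galactica checkpoint with task-specific tokens.
# Assumptions: Hugging Face `transformers` is installed and the
# `facebook/galactica-125m` checkpoint is used (smallest public variant);
# token strings follow the conventions described in the Galactica paper.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def complete(prompt: str, max_new_tokens: int = 60) -> str:
    """Greedy-decode a continuation of `prompt` and return the full text."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0])

# Step-by-step reasoning: the <work> token asks the model to emit its working.
print(complete("What is the derivative of x^3 + 2x?\n\n<work>"))

# Citation prediction: an open [START_REF] token prompts the model to complete a reference.
print(complete("The Transformer architecture was introduced in [START_REF]"))
```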
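For the chemistry and biology modalities noted above, the paper describes wrapping SMILES strings and amino-acid sequences in dedicated markers. The sketch below, under the same assumptions as the previous one, shows such a prompt; the marker names follow the paper's described conventions, while the surrounding question text and the example sequences are hypothetical illustrations.

```python
# Minimal sketch: prompting Galactica with chemistry/biology sequence markers.
# Same assumptions as the previous sketch; the prompt wording is illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# IUPAC-name style prompt around a SMILES string (aspirin, as an arbitrary example).
prompt = (
    "[START_SMILES]CC(=O)Oc1ccccc1C(=O)O[END_SMILES]\n\n"
    "Question: What is the IUPAC name of this molecule?\n\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0]))
```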