2023 ToolformerLanguageModelsCanTeac
- (Schick et al., 2023) ⇒ Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. (2023). “Toolformer: Language Models Can Teach Themselves to Use Tools.” In: arXiv preprint arXiv:2302.04761. doi:10.48550/arXiv.2302.04761
Subject Headings: Toolformer LLM, Software Generation LLM.
Notes
Cited By
2023
- (Patil et al., 2023) ⇒ Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. (2023). “Gorilla: Large Language Model Connected with Massive APIs.” In: arXiv preprint arXiv:2305.15334. doi:10.48550/arXiv.2305.15334
Quotes
Abstract
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
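The abstract describes text annotated with inline API calls whose results are spliced back into the token stream so that later predictions can condition on them. The following is a minimal, hypothetical Python sketch of that idea: the `[Tool(args) -> result]` annotation style follows the paper's description, but the `TOOLS` registry, the regex-based executor, and the example tools shown here are illustrative assumptions rather than the authors' implementation.

```python
import re
from typing import Callable, Dict

# Hypothetical registry mapping API names to simple Python callables.
# The calculator and calendar stand in for the tools named in the abstract.
TOOLS: Dict[str, Callable[[str], str]] = {
    "Calculator": lambda expr: str(round(eval(expr, {"__builtins__": {}}), 2)),
    "Calendar": lambda _: "Today is Friday, February 10, 2023.",
}

# Matches an inline call of the form "[ToolName(arguments)]".
CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_api_calls(text: str) -> str:
    """Replace each inline API call with '[Tool(args) -> result]'.

    This mirrors the idea of inserting tool results into the text so
    that subsequent token predictions can condition on them.
    """
    def _run(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        if tool is None:
            return match.group(0)  # leave unknown calls untouched
        return f"[{name}({args}) -> {tool(args)}]"

    return CALL_PATTERN.sub(_run, text)

if __name__ == "__main__":
    sample = "Out of 1400 participants, 400 [Calculator(400 / 1400)] passed the test."
    print(execute_api_calls(sample))
    # Out of 1400 participants, 400 [Calculator(400 / 1400) -> 0.29] passed the test.
```

In the paper's self-supervised setup, such annotated text is generated by the model itself from a handful of demonstrations per API and then filtered by whether the inserted result reduces the loss on subsequent tokens; the sketch above only illustrates the execution and splicing step.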
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2023 ToolformerLanguageModelsCanTeac | Luke Zettlemoyer, Timo Schick, Thomas Scialom, Jane Dwivedi-Yu, Roberta Raileanu, Maria Lomeli, Nicola Cancedda, Roberto Dessì | | | Toolformer: Language Models Can Teach Themselves to Use Tools | | | | 10.48550/arXiv.2302.04761 | | 2023 |