2023 ReSTMeetsReActSelfImprovementfo
- (Aksitov et al., 2023) ⇒ Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, and Sanjiv Kumar. (2023). “ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent.” doi:10.48550/arXiv.2312.10003
Subject Headings:
Notes
Cited By
Quotes
Abstract
Answering complex natural language questions often necessitates multi-step reasoning and integrating external information. Several systems have combined knowledge retrieval with a large language model (LLM) to answer such questions. These systems, however, suffer from various failure cases, and we cannot directly train them end-to-end to fix such failures, as interaction with external knowledge is non-differentiable. To address these deficiencies, we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. We further refine the agent through a ReST-like method that iteratively trains on previous trajectories, employing growing-batch reinforcement learning with AI feedback for continuous self-improvement and self-distillation. Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model that achieves comparable performance on challenging compositional question-answering benchmarks with two orders of magnitude fewer parameters.
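The self-improvement recipe described in the abstract — collect agent trajectories, score them with AI feedback, and fine-tune on the high-reward subset, repeating for a few iterations — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `generate_trajectory`, `ai_feedback_reward`, and `fine_tune` are hypothetical stubs standing in for the ReAct-style agent rollout, the reward model, and the supervised fine-tuning step.

```python
def generate_trajectory(policy, question):
    # Stub rollout: a real ReAct-style agent would interleave reasoning
    # steps with external knowledge-retrieval actions before answering.
    return {"question": question, "answer": policy(question)}

def ai_feedback_reward(trajectory):
    # Stub AI-feedback reward model: score trajectory quality in [0, 1].
    # Here we only check that an answer was produced at all.
    return 1.0 if trajectory["answer"] else 0.0

def fine_tune(policy, trajectories):
    # Stub fine-tuning step: a real implementation would run supervised
    # fine-tuning on the filtered trajectories (possibly distilling into
    # a smaller model); here the policy is returned unchanged.
    return policy

def rest_loop(policy, questions, iterations=2, threshold=0.5):
    # ReST-like growing-batch loop: each iteration collects a fresh batch
    # of trajectories, keeps only high-reward ones, and fine-tunes on them.
    for _ in range(iterations):
        batch = [generate_trajectory(policy, q) for q in questions]
        kept = [t for t in batch if ai_feedback_reward(t) >= threshold]
        policy = fine_tune(policy, kept)
    return policy
```

The two-iteration default mirrors the abstract's claim that just two rounds of the algorithm suffice to distill a competitive small model; the non-differentiable retrieval interaction is handled by training on whole filtered trajectories rather than by backpropagating through the environment.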
References
| Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|
| Renat Aksitov; Sobhan Miryoosefi; Zonglin Li; Daliang Li; Sheila Babayan; Kavya Kopparapu; Zachary Fisher; Ruiqi Guo; Sushant Prakash; Pranesh Srinivasan; Manzil Zaheer; Felix Yu; Sanjiv Kumar | | 2023 | ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent | | | | 10.48550/arXiv.2312.10003 | | 2023 |