2024 GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
- (Mirzadeh et al., 2024) ⇒ Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar. (2024). “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.” In: arXiv preprint arXiv:2410.05229.
Subject Headings: GSM-Symbolic, GSM8K.
Notes
- The paper introduces GSM-Symbolic, a new benchmark for evaluating mathematical reasoning in LLMs, addressing limitations of the GSM8K benchmark.
- The paper reveals significant performance variability in LLMs when solving different instantiations of the same mathematical question.
- The paper demonstrates that LLM performance drops when only numerical values in questions are changed, highlighting sensitivity to numerical inputs.
- The paper shows that LLM performance deteriorates as the number of clauses in a question increases, suggesting limitations in handling complex problems.
- The paper proposes that LLMs may not perform genuine logical reasoning but instead replicate reasoning steps from training data.
- The paper finds that adding a single clause that appears relevant but does not affect the answer (the GSM-NoOp setting) causes significant performance drops, of up to 65%, across all tested models.
- The paper conducts a large-scale study on both open and closed state-of-the-art language models.
- The paper uses symbolic templates to generate diverse sets of questions, enabling more controllable evaluations (see the sketch after this list).
- The paper questions the reliability of current metrics used to assess mathematical reasoning in LLMs.
- The paper provides insights into the fragility of mathematical reasoning capabilities in current LLMs.
- The paper calls for further research to develop AI models capable of genuine formal reasoning beyond pattern matching.
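A minimal illustrative sketch (not the paper's released code) of the symbolic-template idea: sample names and numeric values, recompute the ground-truth answer for each instantiation, and optionally append a GSM-NoOp-style clause that reads as relevant but does not change the answer. The template text, name pool, numeric ranges, and helper names (`TEMPLATE`, `NOOP_CLAUSE`, `instantiate`) are invented for illustration.

```python
import random

# Sketch of a GSM-Symbolic-style template: names and numbers are sampled
# within ranges, and the ground-truth answer is recomputed per instantiation.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

# GSM-NoOp-style distractor: reads as relevant but does not affect the answer.
NOOP_CLAUSE = " {z} of the apples picked on Tuesday are slightly smaller than average."

def instantiate(seed: int, add_noop: bool = False) -> tuple[str, int]:
    """Generate one question variant and its ground-truth answer."""
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Mia", "Noah"])
    x = rng.randint(5, 40)
    y = rng.randint(5, 40)
    question = TEMPLATE.format(name=name, x=x, y=y)
    if add_noop:
        # The clause changes surface form only; the answer stays x + y.
        question += NOOP_CLAUSE.format(z=rng.randint(1, y))
    return question, x + y

if __name__ == "__main__":
    for seed in range(3):
        q, a = instantiate(seed, add_noop=(seed == 2))
        print(f"Q: {q}\nA: {a}\n")
```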
Cited By
Quotes
Abstract
Recent advancements in Large Language Models (LLMs) have sparked interest in their formal reasoning capabilities, particularly in mathematics. The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics. To address these concerns, we conduct a large-scale study on several SOTA open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions. GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models. Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and show that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer. Overall, our work offers a more nuanced understanding of LLMs' capabilities and limitations in mathematical reasoning.
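A companion sketch of the evaluation protocol the abstract describes: draw several benchmark sets from the same templates, score each draw, and report the spread of accuracies rather than a single number. It reuses `instantiate` from the sketch above; `model_answer` is a hypothetical placeholder for an actual LLM call, returning a dummy value here so the sketch runs end to end.

```python
from statistics import mean, stdev

def model_answer(question: str) -> int:
    # Hypothetical placeholder: replace with a real LLM call.
    return -1  # dummy value so the sketch runs end to end

def accuracy_distribution(num_draws: int = 10, questions_per_draw: int = 20) -> None:
    """Score the model on several distinct instantiations of the benchmark
    and report the mean and standard deviation of per-draw accuracy."""
    accuracies = []
    for draw in range(num_draws):
        correct = 0
        for i in range(questions_per_draw):
            # Distinct seeds per draw yield a different instantiation set each time.
            question, gold = instantiate(seed=draw * questions_per_draw + i)
            correct += int(model_answer(question) == gold)
        accuracies.append(correct / questions_per_draw)
    print(f"accuracy: mean={mean(accuracies):.2f}, sd={stdev(accuracies):.2f}")

if __name__ == "__main__":
    accuracy_distribution()
```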
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2024 GSMSymbolicUnderstandingtheLimi | Samy Bengio, Mehrdad Farajtabar, Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel | | | GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models | | | | | | 2024 |