2024 DoesChatGPTHaveaMind
- (Goldstein & Levinstein, 2024) ⇒ Simon Goldstein, and Benjamin A. Levinstein. (2024). “Does ChatGPT Have a Mind?” doi:10.48550/arXiv.2407.11015
Subject Headings: LLM Internal Representation, LLM Mental State, LLM Sentience, LLM Self-Awareness, Symbol Grounding, Stochastic Parrot, LLM Action Dispositions, LLM Behavioral Analysis.
Notes
- The paper examines whether Large Language Models (LLMs) like ChatGPT possess minds by investigating their internal representations and dispositions to act.
- The paper explores various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each.
- The paper provides empirical evidence from AI interpretability research, showing that LLMs like Othello-GPT maintain internal representations of game states and that these representations generalize to new environments (see the probing sketch after this list).
- The paper addresses three main skeptical challenges to LLM folk psychology: sensory grounding, the "stochastic parrots" argument, and concerns about memorization, arguing that these challenges do not survive philosophical scrutiny.
- The paper concludes that LLMs possess robust internal representations that carry information, are causally effective, and exhibit structural isomorphism with what they represent.
- The paper leaves open the question of whether LLMs have robust dispositions to perform actions, a condition necessary for them to possess beliefs, desires, and intentions.
- The paper emphasizes the importance of future research in understanding the capabilities and limitations of LLMs, suggesting that existing skeptical challenges deserve more careful development.
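The interpretability evidence mentioned in the Othello-GPT note above typically rests on probing: training a small classifier to read a property of interest (such as the occupancy of a board square) out of a model's hidden activations. The sketch below is a minimal, hypothetical illustration of linear probing on synthetic activations; the array shapes, the `board_state` labels, and the injected signal are assumptions made for illustration and are not the paper's or Othello-GPT's actual setup.

```python
# Minimal linear-probing sketch (illustrative only; synthetic data, not Othello-GPT).
# Idea: if a simple linear probe can recover a latent property (e.g., whether a
# board square is occupied) from frozen hidden activations, that is taken as
# evidence the model internally represents that property.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, hidden_dim = 2000, 128

# Hypothetical ground-truth property for each input: one board square,
# occupied (1) or empty (0).
board_state = rng.integers(0, 2, size=n_samples)

# Synthetic stand-in for hidden activations: noise plus a weak linear signal
# encoding the property along a fixed direction in activation space.
signal_direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim)) + 0.5 * np.outer(
    board_state, signal_direction
)

X_train, X_test, y_train, y_test = train_test_split(
    activations, board_state, test_size=0.25, random_state=0
)

# The probe itself: a linear classifier trained on the frozen activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Probe accuracy on held-out activations: {probe.score(X_test, y_test):.2f}")
```

High held-out probe accuracy (relative to a shuffled-label baseline) is the kind of evidence interpretability work appeals to when claiming that a model "represents" a game state.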
Cited By
Quotes
Abstract
This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. The paper approaches this question by investigating two key aspects: internal representations and dispositions to act. First, the paper surveys various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. The paper draws on recent interpretability research in machine learning to support these claims. Second, the paper explores whether LLMs exhibit robust dispositions to perform actions, a necessary component of folk psychology. The paper considers two prominent philosophical traditions, interpretationism and representationalism, to assess LLM action dispositions. While the paper finds evidence suggesting LLMs may satisfy some criteria for having a mind, particularly in game-theoretic environments, it concludes that the data remains inconclusive. Additionally, the paper replies to several skeptical challenges to LLM folk psychology, including issues of sensory grounding, the "stochastic parrots" argument, and concerns about memorization. The paper has three main upshots. First, LLMs do have robust internal representations. Second, there is an open question to answer about whether LLMs have robust action dispositions. Third, existing skeptical challenges to LLM representation do not survive philosophical scrutiny.
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2024 DoesChatGPTHaveaMind | Simon Goldstein, Benjamin A. Levinstein | | 2024 | Does ChatGPT Have a Mind? | | | | 10.48550/arXiv.2407.11015 | | 2024 |