2024 GenerativeAgentSimulationsof1000
- (Park, Zou et al., 2024) ⇒ Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein. (2024). “Generative Agent Simulations of 1,000 People.” doi:10.48550/arXiv.2411.10109
Subject Headings: Generative Agent Architecture, Social Science Simulation, Reflection Module.
Notes
- This paper introduces a novel Generative_Agent_Architecture that simulates the attitudes and behaviors of 1,052 real individuals by applying Large_Language_Models (LLMs) to in-depth interviews.
- This paper demonstrates that the agents replicate participants' responses on the General_Social_Survey (GSS) with 85% normalized accuracy, i.e., 85% as accurately as participants replicate their own answers two weeks later (see the metric sketch after this list), and that they perform comparably in predicting Big_Five_Personality_Traits and outcomes in behavioral experiments.
- This paper highlights that grounding agents in interview data significantly reduces prediction biases across racial and ideological groups compared to agents conditioned only on demographic_descriptions.
- This paper employs a scalable AI_Interviewer that follows a semi-structured protocol while dynamically generating personalized follow-up questions during two-hour participant interviews (a sketch of this follow-up loop appears after this list).
- This paper evaluates agents across four canonical social_science_measures: General_Social_Survey (GSS), Big_Five_Personality_Traits, Behavioral_Economic_Games, and replication of treatment effects.
- This paper finds that even with 80% of interview content removed, interview-based agents outperform those relying on demographic_descriptions or persona_descriptions.
- This paper proposes an Agent Bank with open and restricted API_Access to support research on social_science_simulations and to serve as a benchmark for machine_learning.
- This paper uses Reflection_Modules with expert personas (e.g., psychologist, behavioral_economist) to derive high-level insights from interview transcripts that improve agent predictions (see the reflection sketch after this list).
- This paper addresses ethical concerns of Generative_Agents by emphasizing the need for privacy_protection and governance frameworks to mitigate risks.
- This paper lays a foundation for interdisciplinary research combining social_science_simulations, machine_learning, and policy_analysis to explore and predict human behavior.
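The 85% figure above is a normalized accuracy: agent-participant agreement divided by the participant's own two-week test-retest consistency. A minimal sketch of that normalization, assuming simple categorical survey responses (function and variable names are illustrative, not from the released materials):

```python
def raw_accuracy(predictions, targets):
    """Fraction of items on which two response vectors agree exactly."""
    return sum(p == t for p, t in zip(predictions, targets)) / len(targets)

def normalized_accuracy(agent_answers, wave1_answers, wave2_answers):
    """Agent-vs-participant agreement on wave-1 answers, normalized by the
    participant's own two-week test-retest consistency (wave 2 vs. wave 1)."""
    return raw_accuracy(agent_answers, wave1_answers) / raw_accuracy(wave2_answers, wave1_answers)

# The paper reports roughly 0.85 for GSS items under this kind of normalization.
```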
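A hedged sketch of the AI_Interviewer's follow-up loop: a scripted topic question followed by LLM-generated, transcript-conditioned probes. The `call_llm` and `ask_participant` callables and the prompt wording are assumptions, not the authors' implementation:

```python
def generate_followup(call_llm, topic_question, transcript_text):
    """Ask the LLM for one personalized follow-up probing the participant's
    answers on the current topic (hypothetical prompt wording)."""
    prompt = (
        "You are an interviewer following a semi-structured protocol.\n"
        f"Current topic question: {topic_question}\n"
        f"Transcript so far:\n{transcript_text}\n"
        "Write one short follow-up question that probes a specific detail "
        "the participant just mentioned."
    )
    return call_llm(prompt)

def run_topic(call_llm, ask_participant, topic_question, n_followups=2):
    """Cover one scripted topic question, then a fixed budget of dynamically
    generated follow-ups, returning the (question, answer) transcript."""
    transcript = [(topic_question, ask_participant(topic_question))]
    for _ in range(n_followups):
        text = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)
        followup = generate_followup(call_llm, topic_question, text)
        transcript.append((followup, ask_participant(followup)))
    return transcript
```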
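A sketch, under stated assumptions, of the expert-persona reflection step: reflections derived from the transcript are prepended to the prediction prompt. The persona list echoes the examples above; `call_llm` and the prompt text are illustrative only:

```python
EXPERT_PERSONAS = ["psychologist", "behavioral economist"]  # examples named above

def reflect(call_llm, transcript, persona):
    """Summarize high-level observations about the participant from one
    expert persona's point of view (hypothetical prompt wording)."""
    return call_llm(
        f"As a {persona}, read this interview transcript and write a short "
        f"list of observations about the participant:\n{transcript}"
    )

def answer_survey_item(call_llm, transcript, question, options):
    """Predict the participant's answer to one survey item, conditioning the
    LLM on the full transcript plus the expert reflections."""
    reflections = [reflect(call_llm, transcript, p) for p in EXPERT_PERSONAS]
    prompt = (
        "Interview transcript:\n" + transcript + "\n\n"
        "Expert reflections:\n" + "\n".join(reflections) + "\n\n"
        "Answering as this participant would, choose one option for the "
        f"question below.\nQuestion: {question}\nOptions: {', '.join(options)}"
    )
    return call_llm(prompt)
```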
Cited By
Quotes
Abstract
The promise of human behavioral simulation--general-purpose computational agents that replicate human behavior across domains--could enable broad applications in policymaking and social science. We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals--applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent. The generative agents replicate participants' responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications. Our architecture reduces accuracy biases across racial and ideological groups compared to agents given demographic descriptions. This work provides a foundation for new tools that can help investigate individual and collective behavior.
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2024 GenerativeAgentSimulationsof1000 | Percy Liang, Meredith Ringel Morris, Michael S. Bernstein, Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Robb Willer | | | Generative Agent Simulations of 1,000 People | | | | 10.48550/arXiv.2411.10109 | | 2024 |