2023 JudgingLLMAsaJudgewithMTBenchan
- (Zheng, Chiang et al., 2023) ⇒ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. (2023). “Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.” In: arXiv preprint arXiv:2306.05685. doi:10.48550/arXiv.2306.05685
Subject Headings: LLM-as-a-Judge, MT-Bench Benchmark, Chatbot Arena.
Notes
- It explores using large language models (LLMs) as judges to evaluate other LLMs and chatbots.
- It introduces two new benchmarks - MT-Bench and Chatbot Arena - for assessing human preferences and alignment.
- It finds GPT-4 can match human preferences with over 80% agreement, similar to human-human agreement.
- It examines potential biases of LLM judges, such as position bias, verbosity bias, and self-enhancement bias, and proposes mitigations (e.g., swapping answer positions; see the sketch after this list).
- It shows LLM judges complement standardized benchmarks by evaluating different aspects of models.
- It argues for a hybrid evaluation approach combining capability benchmarks and preference benchmarks with LLM judges.
- It releases the MT-Bench questions, 3K expert votes, and 30K crowdsourced conversations with human preferences to enable further research.
- It proposes future directions like developing open LLM judges aligned with humans and enhancing reasoning skills.
- It concludes that LLM-as-a-judge is promising for scalable and automated evaluation of human alignment.
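The position-bias mitigation noted above can be illustrated with a short sketch. Below is a minimal, hypothetical example (not the authors' released code) of pairwise LLM-as-a-judge with a position-swap consistency check: the judge is queried twice with the answer order reversed, and a win is counted only when both verdicts agree. `call_judge` and the prompt wording are assumptions standing in for whatever function sends a prompt to the judge model (e.g., GPT-4) and returns its text verdict.

```python
# Minimal sketch of pairwise LLM-as-a-judge with position swapping.
# The prompt text is illustrative, not the paper's exact judge prompt.

JUDGE_PROMPT = """[System] Please act as an impartial judge and decide which
of the two AI assistant answers below better follows the user's question.
Output exactly "A", "B", or "tie".

[Question]
{question}

[Answer A]
{answer_a}

[Answer B]
{answer_b}
"""


def parse_verdict(text: str) -> str:
    """Map the judge's raw output to 'A', 'B', or 'tie'."""
    text = text.strip().upper()
    if text.startswith("A"):
        return "A"
    if text.startswith("B"):
        return "B"
    return "tie"


def pairwise_judge(call_judge, question, answer_1, answer_2):
    """Judge twice with the answer order swapped; count a win only when the
    verdict is consistent across both orders, otherwise declare a tie."""
    first = parse_verdict(call_judge(JUDGE_PROMPT.format(
        question=question, answer_a=answer_1, answer_b=answer_2)))
    second = parse_verdict(call_judge(JUDGE_PROMPT.format(
        question=question, answer_a=answer_2, answer_b=answer_1)))

    # In the swapped run, "A" refers to answer_2 and "B" to answer_1.
    if first == "A" and second == "B":
        return "model_1"
    if first == "B" and second == "A":
        return "model_2"
    return "tie"
```

Treating order-dependent verdicts as ties is one of the mitigations the paper discusses for position bias.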
Cited By
Quotes
Abstract
Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at this https URL.
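The 80% figure quoted above is a plain agreement rate: the fraction of pairwise comparisons on which two judges (LLM or human) reach the same verdict. A minimal sketch of that computation, using illustrative labels (`model_1`, `model_2`, `tie`) rather than the paper's exact data format:

```python
def agreement_rate(judge_a_verdicts, judge_b_verdicts, include_ties=True):
    """Fraction of comparisons on which two judges give the same verdict.
    Inputs are parallel lists of labels such as 'model_1', 'model_2', 'tie'."""
    pairs = list(zip(judge_a_verdicts, judge_b_verdicts))
    if not include_ties:
        # Optionally drop comparisons where either judge declared a tie.
        pairs = [(a, b) for a, b in pairs if a != "tie" and b != "tie"]
    if not pairs:
        return float("nan")
    return sum(a == b for a, b in pairs) / len(pairs)


# Example: 4 of 5 matching verdicts -> 0.8 agreement.
print(agreement_rate(
    ["model_1", "model_2", "tie", "model_1", "model_2"],
    ["model_1", "model_2", "tie", "model_2", "model_2"],
))
```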
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2023 JudgingLLMAsaJudgewithMTBenchan | Eric P. Xing, Ion Stoica, Hao Zhang, Joseph E. Gonzalez, Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li | | | Judging LLM-as-a-judge with MT-Bench and Chatbot Arena | | | | 10.48550/arXiv.2306.05685 | | 2023 |