Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena

Abstract

Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement as between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show that our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
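The pairwise judging procedure the abstract describes can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's exact judge prompt: `query_llm` is a hypothetical stand-in for a call to a strong judge model such as GPT-4, and the prompt wording is assumed. It shows the position-swap technique for mitigating position bias: each answer pair is judged in both orders, and an inconsistent verdict is treated as a tie.

```python
# Minimal sketch of pairwise LLM-as-a-judge with position-swap debiasing.
# `query_llm` is a hypothetical placeholder for any chat-completion call;
# the prompt wording below is illustrative, not the official MT-bench prompt.

JUDGE_PROMPT = """[Question]
{question}

[Assistant A's answer]
{answer_a}

[Assistant B's answer]
{answer_b}

Which answer is better? Reply with exactly "A", "B", or "tie"."""


def query_llm(prompt: str) -> str:
    """Hypothetical call to a strong judge model (e.g. GPT-4);
    replace with your provider's chat-completion API."""
    raise NotImplementedError


def judge_once(question: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge for a single verdict in the given order."""
    verdict = query_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b)).strip()
    return verdict if verdict in {"A", "B", "tie"} else "tie"


def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Judge both orderings to mitigate position bias: a model only
    wins if the verdict survives swapping the answer positions."""
    first = judge_once(question, answer_a, answer_b)
    swapped = judge_once(question, answer_b, answer_a)
    # Map the swapped verdict back to the original labels.
    swapped = {"A": "B", "B": "A", "tie": "tie"}[swapped]
    return first if first == swapped else "tie"  # inconsistent -> tie
```

Treating an order-dependent verdict as a tie is the conservative option: it trades some decisiveness for robustness against the judge systematically favoring one position.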

Code Repositories

opengvlab/multi-modality-arena (PyTorch)
lm-sys/routellm (PyTorch)
formulamonks/llm-benchmarker-suite (PyTorch)
ojiyumm/mt_bench_rwkv (PyTorch)
lm-sys/fastchat (Official, PyTorch)
ilyagusev/ping_pong_bench
theoremone/llm-benchmarker-suite (PyTorch)
PAIR-code/llm-comparator (TensorFlow)
kuk/rulm-sbs2
dongping-chen/mllm-as-a-judge (PyTorch)
bjoernpl/fasteval

Benchmarks

long-context-understanding-on-ada-leval (score per evaluated context length)

Model                  1k     2k     4k     6k     8k     12k    16k
Vicuna-7b-v1.5-16k     37.0   11.1   5.8    3.2    1.8    1.9    1.0
LongChat-7b-v1.5-32k   32.4   10.7   5.7    3.1    1.9    1.6    0.8
Vicuna-13b-v1.5-16k    53.4   29.2   13.1   4.3    2.2    1.4    0.9

long-context-understanding-on-ada-leval-tsort (score per evaluated context length)

Model                  2k     4k     8k     16k
LongChat-7b-v1.5-32k   5.3    5.0    3.1    2.5
Vicuna-7b-v1.5-16k     5.3    2.2    2.3    1.7
Vicuna-13b-v1.5-16k    5.4    5.0    2.4    3.1
