
Question Answering on Social IQA

Evaluation Metric

Accuracy
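
Accuracy here is the fraction of Social IQA questions for which a model selects the correct one of the three candidate answers. The minimal Python sketch below illustrates how such a score is computed; it is not the benchmark's official scoring script, and the index-based label encoding is an assumption for demonstration only.

```python
def accuracy(predictions, gold_labels):
    """Fraction of questions where the predicted choice equals the gold choice."""
    if len(predictions) != len(gold_labels):
        raise ValueError("predictions and gold_labels must be the same length")
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

if __name__ == "__main__":
    # Toy example: 3-way multiple choice, encoded as answer indices 0-2
    # (an illustrative encoding, not the dataset's official format).
    # Random guessing over three choices scores about 0.333, which matches
    # the 33.3 "Random chance baseline" row in the results table.
    preds = [0, 2, 1, 1, 0, 2]
    golds = [0, 1, 1, 2, 0, 2]
    print(f"Accuracy: {accuracy(preds, golds):.1%}")  # -> 66.7%
```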

Evaluation Results

Performance of each model on this benchmark:

| Model Name | Accuracy | Paper Title |
|---|---|---|
| LLaMA 13B (zero-shot) | 50.4 | LLaMA: Open and Efficient Foundation Language Models |
| LLaMA 7B (zero-shot) | 48.9 | LLaMA: Open and Efficient Foundation Language Models |
| UnifiedQA 3B | 79.8 | UnifiedQA: Crossing Format Boundaries With a Single QA System |
| Chinchilla (zero-shot) | 51.3 | Training Compute-Optimal Large Language Models |
| LLaMA 65B (zero-shot) | 52.3 | LLaMA: Open and Efficient Foundation Language Models |
| DeBERTa-Large 304M | 80.2 | Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering |
| RoBERTa-Large 355M (fine-tuned) | 76.7 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| DeBERTa-Large 304M (classification-based) | 79.9 | Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering |
| LLaMA-3 8B + MixLoRA | 78.8 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts |
| CompassMTL 567M with Tailor | 82.2 | Task Compass: Scaling Multi-task Pre-training with Task Prefix |
| LLaMA-3 8B + MoSLoRA (fine-tuned) | 81.0 | Mixture-of-Subspaces in Low-Rank Adaptation |
| Random chance baseline | 33.3 | SocialIQA: Commonsense Reasoning about Social Interactions |
| Gopher (zero-shot) | 50.6 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher |
| BERT-base 110M (fine-tuned) | 63.1 | SocialIQA: Commonsense Reasoning about Social Interactions |
| ExDeBERTa 567M | 79.6 | Task Compass: Scaling Multi-task Pre-training with Task Prefix |
| BERT-large 340M (fine-tuned) | 64.5 | SocialIQA: Commonsense Reasoning about Social Interactions |
| GPT-1 117M (fine-tuned) | 63.0 | SocialIQA: Commonsense Reasoning about Social Interactions |
| LLaMA-2 13B + MixLoRA | 82.5 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts |
| LLaMA 33B (zero-shot) | 50.4 | LLaMA: Open and Efficient Foundation Language Models |
| CompassMTL 567M | 81.7 | Task Compass: Scaling Multi-task Pre-training with Task Prefix |