Large Language Models Encode Clinical Knowledge

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Examination questions), surpassing the prior state of the art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
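
The instruction prompt tuning described above updates only a small set of learned soft-prompt vectors that are prepended to the frozen model's input, using a handful of exemplars; the base model's weights are untouched, which is what makes the approach parameter-efficient. The following is a minimal PyTorch sketch of that general idea, with a hypothetical wrapper class, toy backbone, and placeholder loss rather than the paper's Flan-PaLM/Med-PaLM implementation.

import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    # Prepends trainable soft-prompt vectors to the input embeddings of a frozen model.
    def __init__(self, frozen_lm, embed_dim, prompt_len=8):
        super().__init__()
        self.frozen_lm = frozen_lm
        for p in self.frozen_lm.parameters():
            p.requires_grad = False  # the base model stays frozen
        # The only trainable parameters: a few learned prompt vectors.
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, embed_dim))

    def forward(self, token_embeds):
        # token_embeds: (batch, seq_len, embed_dim), e.g. taken from the LM's embedding table.
        batch = token_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.frozen_lm(torch.cat([prompt, token_embeds], dim=1))

# Hypothetical stand-in for a frozen LM backbone (not PaLM).
backbone = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
model = SoftPromptWrapper(backbone, embed_dim=64, prompt_len=4)
optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)

# One illustrative update on random "exemplar" embeddings with a placeholder loss.
exemplars = torch.randn(2, 10, 64)
loss = model(exemplars).pow(2).mean()
loss.backward()
optimizer.step()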

Code Repositories

dmis-lab/olaph (PyTorch), mentioned on GitHub

Benchmarks

Benchmark, methodology, and reported metric:

multiple-choice-question-answering-mcqa-on-21 (Dev Set Acc-%)
  PaLM (8B, Few-shot): 0.267
  PaLM (62B, Few-shot): 0.434
  PaLM (540B, Few-shot): 0.545
  Flan-PaLM (8B, Few-shot): 0.345
  Flan-PaLM (62B, Few-shot): 0.462
  Flan-PaLM (540B, Few-shot): 0.565
  Flan-PaLM (540B, CoT): 0.536
  Flan-PaLM (540B, SC): 0.576

question-answering-on-medqa-usmle (Accuracy)
  GPT-Neo (2.7B): 33.3
  BioLinkBERT (340M): 45.1
  PubMedGPT (2.7B): 50.3
  Flan-PaLM (540B): 67.6

question-answering-on-pubmedqa (Accuracy)
  PaLM (8B, Few-shot): 34
  PaLM (62B, Few-shot): 57.8
  PaLM (540B, Few-shot): 55
  Flan-PaLM (8B, Few-shot): 67.6
  Flan-PaLM (62B, Few-shot): 77.2
  Flan-PaLM (540B, Few-shot): 79
  Flan-PaLM (540B, SC): 75.2
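
In the methodology column above, "Few-shot" denotes few-shot prompting, "CoT" chain-of-thought prompting, and "SC" self-consistency, which samples several chain-of-thought completions and takes a majority vote over their final answers. The sketch below illustrates that voting step and the plain accuracy metric reported in the table; the sampled answers and reference labels are hypothetical placeholders, not real model output.

from collections import Counter

def self_consistency_vote(sampled_answers):
    # Majority vote over the final answers extracted from several sampled
    # chain-of-thought completions (ties resolve to the answer seen most often first).
    return Counter(sampled_answers).most_common(1)[0][0]

def accuracy(predictions, references):
    # Fraction of questions whose predicted option matches the reference option.
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical sampled answer letters for one multiple-choice question.
print(self_consistency_vote(["B", "B", "C", "B", "A"]))      # -> B
# Hypothetical predictions scored against reference answers.
print(accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]))  # -> 0.75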
