Ask Me Anything: A simple strategy for prompting language models

Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré

Abstract

Large language models (LLMs) transfer well to new tasks out-of-the-box given only a natural language prompt that demonstrates how to perform the task, with no additional training. Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated to designing a painstakingly "perfect prompt" for a task. To mitigate the high degree of effort involved in prompt design, we instead ask whether producing multiple effective, yet imperfect, prompts and aggregating them can lead to a high-quality prompting strategy. Our observations motivate our proposed prompting method, ASK ME ANYTHING (AMA). We first develop an understanding of effective prompt formats, finding that question-answering (QA) prompts, which encourage open-ended generation ("Who went to the park?"), tend to outperform those that restrict the model outputs ("John went to the park. Output True or False."). Our approach recursively uses the LLM itself to transform task inputs into the effective QA format. We apply the collected prompts to obtain several noisy votes for the input's true label. We find that the prompts can have very different accuracies and complex dependencies, and we therefore propose to use weak supervision, a procedure for combining the noisy predictions, to produce the final predictions for the inputs. We evaluate AMA across open-source model families (e.g., EleutherAI, BLOOM, OPT, and T0) and model sizes (125M-175B parameters), demonstrating an average performance lift of 10.2% over the few-shot baseline. This simple strategy enables the open-source GPT-J-6B model to match and exceed the performance of few-shot GPT3-175B on 15 of 20 popular benchmarks. Averaged across these tasks, GPT-J-6B outperforms few-shot GPT3-175B. We release our code here: https://github.com/HazyResearch/ama_prompting
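
To make the recipe in the abstract concrete, here is a minimal Python sketch (not the authors' code; the released repository contains the actual prompt chains and the weak-supervision aggregator). It uses the LLM to rewrite a claim as an open-ended question, collects one noisy vote per prompt chain, and combines the votes with a plain majority vote as a simplified stand-in for the paper's weak-supervision step. The `generate` helper and the prompt templates below are hypothetical placeholders.

```python
# Minimal sketch of the AMA-style pipeline, assuming a hypothetical generate()
# helper that calls an LLM (e.g., GPT-J-6B via Hugging Face transformers).
from collections import Counter

def generate(prompt: str) -> str:
    """Placeholder LLM call; wire this to your model of choice."""
    raise NotImplementedError

# Hypothetical prompt chains: each chain first asks the LLM to rewrite the
# claim as an open-ended question, then answers that question from the context.
QUESTION_CHAINS = [
    "Rewrite the statement as a question.\nStatement: {claim}\nQuestion:",
    "Turn the claim into a yes/no question.\nClaim: {claim}\nQuestion:",
    "Write a question that checks the claim.\nClaim: {claim}\nQuestion:",
]
ANSWER_TEMPLATE = "Context: {context}\nQuestion: {question}\nAnswer:"

def ama_predict(context: str, claim: str) -> str:
    """Collect one noisy vote per prompt chain and aggregate the votes.

    The paper aggregates with weak supervision (modeling each prompt's
    accuracy and dependencies); majority vote is a simplified stand-in.
    """
    votes = []
    for chain in QUESTION_CHAINS:
        question = generate(chain.format(claim=claim)).strip()
        answer = generate(ANSWER_TEMPLATE.format(context=context, question=question))
        votes.append("True" if answer.strip().lower().startswith("yes") else "False")
    return Counter(votes).most_common(1)[0][0]
```

In this sketch each prompt chain produces an imperfect but usable prediction, and the aggregation step is what turns several such noisy votes into the final label; swapping the majority vote for a weak-supervision model is what the paper reports results with.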

Code Repositories

hazyresearch/ama_prompting (official implementation)
simran-arora/focus (PyTorch)
simran-arora/privacy_fm (PyTorch)

Benchmarks

Benchmark | Methodology | Metric
Coreference Resolution on Winograd Schema | Neo-6B (few-shot) | Accuracy: 36.5
Coreference Resolution on Winograd Schema | Neo-6B (QA) | Accuracy: 74.7
Coreference Resolution on Winograd Schema | Neo-6B (QA + WS) | Accuracy: 77.9
Natural Language Inference on RTE | Neo-6B (few-shot) | Accuracy: 58.8%
Natural Language Inference on RTE | Neo-6B (QA) | Accuracy: 61.7%
Natural Language Inference on RTE | Neo-6B (QA + WS) | Accuracy: 75.1%
Question Answering on BoolQ | Neo-6B (few-shot) | Accuracy: 66.5
Question Answering on BoolQ | Neo-6B (QA) | Accuracy: 64.9
Question Answering on BoolQ | Neo-6B (QA + WS) | Accuracy: 67.2
Question Answering on COPA | Neo-6B (few-shot) | Accuracy: 77.0
Question Answering on COPA | Neo-6B (QA) | Accuracy: 58.2
Question Answering on COPA | Neo-6B (QA + WS) | Accuracy: 84.0
Question Answering on MultiRC | Neo-6B (few-shot) | F1: 60.8
Question Answering on MultiRC | Neo-6B (QA) | F1: 58.8
Question Answering on MultiRC | Neo-6B (QA + WS) | F1: 63.8
Question Answering on Natural Questions | Neo-6B (few-shot) | EM: 13.7
Question Answering on Natural Questions | Neo-6B (QA) | EM: 19.7
Question Answering on Natural Questions | Neo-6B (QA + WS) | EM: 19.6
Question Answering on Story Cloze | Neo-6B (few-shot) | Accuracy: 51.0
Question Answering on Story Cloze | Neo-6B (QA) | Accuracy: 76.3
Question Answering on Story Cloze | Neo-6B (QA + WS) | Accuracy: 87.8
