Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models
Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu, Chang Zhou, Jingren Zhou

Abstract
One core capability of large language models (LLMs) is to follow natural language instructions. However, the issue of automatically constructing high-quality training data to enhance the complex instruction-following abilities of LLMs without manual annotation remains unresolved. In this paper, we introduce AutoIF, the first scalable and reliable method for automatically generating instruction-following training data. AutoIF transforms the validation of instruction-following data quality into code verification, requiring LLMs to generate instructions, the corresponding code to check the correctness of the instruction responses, and unit test samples to verify the code's correctness. Then, execution feedback-based rejection sampling can generate data for Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) training. AutoIF achieves significant improvements across three training algorithms, SFT, Offline DPO, and Online DPO, when applied to the top open-source LLMs, Qwen2 and LLaMA3, in self-alignment and strong-to-weak distillation settings. Our code is publicly available at https://github.com/QwenLM/AutoIF.
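To make the pipeline described in the abstract concrete, below is a minimal sketch of execution-feedback-based rejection sampling: an LLM-written verifier function is first validated against LLM-written unit test samples, and only then used to filter candidate responses. The `llm` object, its `generate` method, the `check(response)` convention, and all helper names here are assumptions for illustration, not APIs from the AutoIF repository; a real implementation would also sandbox the executed code.

```python
# Hedged sketch of AutoIF-style execution-feedback rejection sampling.
# All names (llm.generate, check, compile_verifier, ...) are hypothetical.

from typing import Callable, List, Tuple


def compile_verifier(verifier_code: str) -> Callable[[str], bool]:
    """Execute LLM-written verifier code and return its `check` function.

    Assumes the generated code defines `def check(response: str) -> bool`.
    In practice this should run inside a sandboxed interpreter.
    """
    namespace: dict = {}
    exec(verifier_code, namespace)  # sandboxing omitted for brevity
    return namespace["check"]


def verifier_passes_unit_tests(
    check: Callable[[str], bool],
    unit_tests: List[Tuple[str, bool]],
) -> bool:
    """Keep the verifier only if it labels every test sample correctly."""
    try:
        return all(check(sample) == expected for sample, expected in unit_tests)
    except Exception:
        return False


def rejection_sample(llm, instruction: str, check, num_candidates: int = 8) -> List[str]:
    """Sample candidate responses and keep those the verifier accepts."""
    accepted = []
    for _ in range(num_candidates):
        response = llm.generate(instruction)  # hypothetical generation call
        try:
            if check(response):
                accepted.append(response)
        except Exception:
            pass  # a crashing verifier call counts as a rejection
    return accepted
```

Under these assumptions, accepted responses would serve as SFT data, while accepted/rejected pairs for the same instruction could form the preference pairs used by the Offline and Online DPO training mentioned in the abstract.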
Code Repositories
- https://github.com/QwenLM/AutoIF
Benchmarks
| Benchmark | Methodology | Prompt-level strict acc. | Prompt-level loose acc. | Inst-level strict acc. | Inst-level loose acc. |
|---|---|---|---|---|---|
| instruction-following-on-ifeval | AutoIF (Llama3 70B) | 80.2 | 85.6 | 86.7 | 90.4 |
| instruction-following-on-ifeval | AutoIF (Qwen2 72B) | 80.2 | 82.3 | 86.1 | 88 |