Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code

Shahin Honarvar, Mark van der Wilk, Alastair Donaldson

Abstract

We present a method for systematically evaluating the correctness and robustness of instruction-tuned large language models (LLMs) for code generation via a new benchmark, Turbulence. Turbulence consists of a large set of natural language *question templates*, each of which is a programming problem, parameterised so that it can be asked in many different forms. Each question template has an associated *test oracle* that judges whether a code solution returned by an LLM is correct. Thus, from a single question template, it is possible to ask an LLM a *neighbourhood* of very similar programming questions, and assess the correctness of the result returned for each question. This allows gaps in an LLM's code generation abilities to be identified, including *anomalies* where the LLM correctly solves *almost all* questions in a neighbourhood but fails for particular parameter instantiations. We present experiments against five LLMs from OpenAI, Cohere and Meta, each at two temperature configurations. Our findings show that, across the board, Turbulence is able to reveal gaps in LLM reasoning ability. This goes beyond merely highlighting that LLMs sometimes produce wrong code (which is no surprise): by systematically identifying cases where LLMs are able to solve some problems in a neighbourhood but do not manage to generalise to solve the whole neighbourhood, our method is effective at highlighting *robustness* issues. We present data and examples that shed light on the kinds of mistakes that LLMs make when they return incorrect code results.
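
To make the template/oracle mechanism concrete, here is a minimal sketch in Python. The `QuestionTemplate` class, the `solve(n)` example problem, and the `sum_oracle` checker are hypothetical illustrations of the idea described in the abstract, not code from the Turbulence benchmark itself.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class QuestionTemplate:
    prompt: str  # natural-language problem text with an {n} placeholder
    oracle: Callable[[str, int], bool]  # judges an LLM's answer for one instantiation

    def neighbourhood(self, params: List[int]) -> List[Tuple[int, str]]:
        """Instantiate the template once per parameter, yielding very similar questions."""
        return [(p, self.prompt.format(n=p)) for p in params]

def sum_oracle(llm_code: str, n: int) -> bool:
    """Execute the returned code and compare it against a trusted reference solution."""
    scope: dict = {}
    try:
        exec(llm_code, scope)  # expects the LLM's answer to define solve(n)
        return scope["solve"](n) == sum(range(1, n + 1))
    except Exception:
        return False

template = QuestionTemplate(
    prompt="Write a Python function solve(n) that returns the sum of the integers 1 to {n}.",
    oracle=sum_oracle,
)

# Stand-in for querying a real LLM; in the benchmark this would be a model response.
candidate = "def solve(n):\n    return n * (n + 1) // 2"

for n, question in template.neighbourhood([5, 10, 100]):
    print(n, template.oracle(candidate, n))  # True for every n if the answer generalises
```

Sweeping the parameter produces a neighbourhood of near-identical questions, so a failure on one particular instantiation stands out as exactly the kind of anomaly the paper reports.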

Code Repositories

shahinhonarvar/turbulence-benchmark (official)

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| code-generation-on-turbulence | GPT-3.5-Turbo | CorrSc: 0.617 |
| code-generation-on-turbulence | CodeLlama:13B-4bit-quantised | CorrSc: 0.327 |
| code-generation-on-turbulence | GPT-4 | CorrSc: 0.848 |
| code-generation-on-turbulence | Command | CorrSc: 0.063 |
| code-generation-on-turbulence | CodeLlama:7B-4bit-quantised | CorrSc: 0.289 |
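
The table reports a single CorrSc value per model. A plausible reading, and it is only our assumption here, is that CorrSc aggregates the fraction of oracle-approved answers across all instantiated questions; the sketch below computes such a score.

```python
# Minimal sketch of a correctness score. We assume CorrSc is the fraction of
# questions, pooled across all neighbourhoods, whose answers the test oracle
# accepts; consult the paper for the metric's exact definition.
def corr_sc(results: list[list[bool]]) -> float:
    """results[i][j] records whether the LLM passed question j of neighbourhood i."""
    total = sum(len(nb) for nb in results)
    passed = sum(sum(nb) for nb in results)
    return passed / total if total else 0.0

# Two hypothetical neighbourhoods of three questions each:
print(corr_sc([[True, True, False], [True, False, False]]))  # 0.5
```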
