CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging

Md. Ashraful Islam, Mohammed Eunus Ali, Md Rizwan Parvez


Abstract

Large Language Models (LLMs) have made significant strides in code generation and problem solving. Current approaches employ external tool-based iterative debuggers that use compiler or other tool-based runtime feedback to refine coarse programs generated by various methods. However, the effectiveness of these approaches heavily relies on the quality of the initial code generation, which remains an open challenge. In this paper, we introduce CodeSim, a novel multi-agent code generation framework that comprehensively addresses the stages of program synthesis (planning, coding, and debugging) through a human-like perception approach. Just as humans verify their understanding of an algorithm through visual simulation, CodeSim uniquely features a method of plan verification and internal debugging through the step-by-step simulation of input/output. Extensive experiments across seven challenging competitive problem-solving and program synthesis benchmarks demonstrate CodeSim's remarkable code generation capabilities. Our framework achieves new state-of-the-art pass@1 results (HumanEval 95.1%, MBPP 90.7%, APPS 22%, and CodeContests 29.1%). Furthermore, our method shows potential for even greater enhancement when cascaded with external debuggers. To facilitate further research and development in this area, we have open-sourced our framework at https://kagnlp.github.io/codesim.github.io/.
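The abstract's plan-verify-code-debug cycle can be sketched as a simple control loop. The sketch below is illustrative only: the function names (`codesim_loop`, `verify_plan`, etc.) and the stub "agents" are assumptions standing in for the paper's LLM-driven planner, simulator, coder, and debugger, not the framework's actual API.

```python
# Illustrative sketch (hypothetical names, not the paper's actual API) of a
# CodeSim-style loop: draft a plan, verify it by simulating input/output,
# generate code, then debug internally against sample tests.

def codesim_loop(planner, verify_plan, coder, run_tests, debugger,
                 max_attempts=3):
    """Return a candidate program that passes the sample tests, or None."""
    for _ in range(max_attempts):
        plan = planner()
        if not verify_plan(plan):      # plan verification via simulation
            continue                   # discard the plan, draft a new one
        code = coder(plan)
        if run_tests(code):            # internal check on sample I/O
            return code
        code = debugger(code)          # one repair pass, then re-check
        if run_tests(code):
            return code
    return None

# Toy usage: each "agent" is a stub standing in for an LLM call.
samples = [(2, 4), (3, 9)]
solution = codesim_loop(
    planner=lambda: "square the input",
    verify_plan=lambda plan: "square" in plan,   # stands in for I/O simulation
    coder=lambda plan: (lambda x: x * x),
    run_tests=lambda f: all(f(i) == o for i, o in samples),
    debugger=lambda f: f,
)
```

The key design point the loop captures is that verification happens twice: once on the plan (before any code exists, via simulated input/output tracing) and once on the generated program (via sample tests), so a bad plan is discarded cheaply instead of being debugged.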

Code Repositories

kagnlp/CodeGenerator

Benchmarks

Benchmark                        | Methodology                          | Metrics
code-generation-on-apps         | CodeSim (GPT-4)                      | Competition Pass@1: 0.81, Interview Pass@1: 4.21, Introductory Pass@1: 26.04
code-generation-on-codecontests | CodeSim (GPT-4)                      | Test Set pass@1: 29.1
code-generation-on-humaneval    | CodeSim (GPT-4o and LDB Debugger)    | Pass@1: 97.6
code-generation-on-humaneval    | CodeSim (o3-mini)                    | Pass@1: 98.8
code-generation-on-humaneval    | CodeSim (GPT-4o)                     | Pass@1: 95.1
code-generation-on-mbpp         | CodeSim (GPT-4o)                     | Accuracy: 90.7
