---
language:
- en
license: mit
size_categories:
- n<100
task_categories:
- text-generation
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: FAIL_TO_PASS
    list: string
  - name: PASS_TO_PASS
    list: string
  - name: image_name
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  - name: repo_settings
    dtype: string
  splits:
  - name: level1
    num_bytes: 68545
    num_examples: 1
  - name: level2
    num_bytes: 60238
    num_examples: 1
  download_size: 60937
  dataset_size: 128783
configs:
- config_name: default
  data_files:
  - split: level1
    path: data/level1-*
  - split: level2
    path: data/level2-*
tags:
- code
- agents
- software-engineering
---
# ACE-Bench: Agent Coding Evaluation Benchmark

## Dataset Description
ACE-Bench is a comprehensive benchmark designed to evaluate AI agents' capabilities in end-to-end feature-level code generation. Unlike traditional benchmarks that focus on function-level or algorithm-specific tasks, ACE-Bench challenges agents to implement complete features within real-world software projects.
### Key Characteristics
- Feature-Level Tasks: Each task requires implementing a complete feature, including multiple functions, classes, and their interactions
- Real-World Codebases: Tasks are derived from actual open-source projects, preserving the complexity and context of production code
- End-to-End Evaluation: Agents must understand requirements, generate code, and pass comprehensive test suites
- Two Difficulty Levels:
  - Level 1: Agents receive masked code with interface signatures and must implement the complete functionality
  - Level 2: Agents receive only test files and must implement both the interface and functionality from scratch
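The two levels can be told apart programmatically: as described in the Dataset Structure section, Level 1 instances carry a gold `patch` diff while Level 2 instances leave that field empty. A minimal helper sketch (illustrative only; `task_level` is not part of the dataset API):

```python
def task_level(task: dict) -> int:
    """Infer the difficulty level of an ACE-Bench instance.

    Level 1 instances include a gold `patch` (a non-empty git diff);
    Level 2 instances ship an empty `patch` string, since the agent
    must implement the interface from scratch.
    """
    return 1 if task.get("patch") else 2


# Example: a record with a non-empty patch is Level 1.
print(task_level({"patch": "diff --git a/mod.py b/mod.py"}))  # 1
print(task_level({"patch": ""}))                              # 2
```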
### Dataset Statistics
- Total Instances: 2
- Level 1 Instances: 1
- Level 2 Instances: 1
- Total Size: 125.76 KB
- Download Size: 59.51 KB
## Dataset Structure
Each instance in ACE-Bench contains:
- `instance_id`: Unique identifier for the task
- `patch`: Git diff showing the implementation (Level 1) or empty string (Level 2)
- `test_patch`: Git diff showing test file modifications
- `FAIL_TO_PASS`: List of test files that must pass after implementation
- `PASS_TO_PASS`: List of test files that must continue passing (Level 1 only)
- `image_name`: Docker image containing the development environment
- `repo`: Source repository (e.g., `owner/repo-name`)
- `base_commit`: Git commit hash of the base version
- `problem_statement`: Detailed task description and requirements
- `repo_settings`: Repository configuration settings as a JSON string (from `python.py`)
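The two test lists suggest the usual resolution criterion: every `FAIL_TO_PASS` test must pass after the agent's change, and (for Level 1) every `PASS_TO_PASS` test must keep passing. A hedged sketch of that check, assuming test outcomes are available as a name-to-pass mapping (the dataset itself only stores the test names, not results):

```python
def is_resolved(results: dict,
                fail_to_pass: list,
                pass_to_pass: list) -> bool:
    """Return True if a task counts as solved under the
    FAIL_TO_PASS / PASS_TO_PASS convention.

    `results` maps each test identifier to whether it passed after
    the agent's implementation was applied (hypothetical format).
    A test missing from `results` is treated as failed.
    """
    # Every previously failing test must now pass...
    f2p_ok = all(results.get(t, False) for t in fail_to_pass)
    # ...and no previously passing test may regress
    # (PASS_TO_PASS may be empty for Level 2 instances).
    p2p_ok = all(results.get(t, False) for t in pass_to_pass)
    return f2p_ok and p2p_ok
```

With an empty `pass_to_pass` list (as on Level 2), only the `FAIL_TO_PASS` tests decide the outcome.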
## Usage

```python
import json
from datasets import load_dataset

# Load Level 1 tasks
dataset_lv1 = load_dataset("BamChil/ACE-Bench", split="level1")

# Load Level 2 tasks
dataset_lv2 = load_dataset("BamChil/ACE-Bench", split="level2")

# Example: access a task
task = dataset_lv1[0]
print(task['instance_id'])
print(task['problem_statement'])

# Parse repo_settings from its JSON string
repo_settings = json.loads(task['repo_settings'])
print(repo_settings['repository'])
print(repo_settings['base_image'])
```