---
language:
- en
license: mit
size_categories:
- n<100
task_categories:
- text-generation
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: FAIL_TO_PASS
    list: string
  - name: PASS_TO_PASS
    list: string
  - name: image_name
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  - name: repo_settings
    dtype: string
  splits:
  - name: level1
    num_bytes: 68545
    num_examples: 1
  - name: level2
    num_bytes: 60238
    num_examples: 1
  download_size: 60937
  dataset_size: 128783
configs:
- config_name: default
  data_files:
  - split: level1
    path: data/level1-*
  - split: level2
    path: data/level2-*
tags:
- code
- agents
- software-engineering
---

# ACE-Bench: Agent Coding Evaluation Benchmark

## Dataset Description

ACE-Bench is a comprehensive benchmark designed to evaluate AI agents' capabilities in end-to-end feature-level code generation. Unlike traditional benchmarks that focus on function-level or algorithm-specific tasks, ACE-Bench challenges agents to implement complete features within real-world software projects.

### Key Characteristics

- **Feature-Level Tasks**: Each task requires implementing a complete feature, including multiple functions, classes, and their interactions
- **Real-World Codebases**: Tasks are derived from actual open-source projects, preserving the complexity and context of production code
- **End-to-End Evaluation**: Agents must understand requirements, generate code, and pass comprehensive test suites
- **Two Difficulty Levels**:
  - **Level 1**: Agents receive masked code with interface signatures and must implement the complete functionality
  - **Level 2**: Agents receive only test files and must implement both the interface and functionality from scratch

### Dataset Statistics

- **Total Instances**: 2
- **Level 1 Instances**: 1
- **Level 2 Instances**: 1
- **Total Size**: 125.76 KB
- **Download Size**: 59.51 KB

## Dataset Structure

Each instance in ACE-Bench contains:

- `instance_id`: Unique identifier for the task
- `patch`: Git diff showing the implementation (Level 1) or empty string (Level 2)
- `test_patch`: Git diff showing test file modifications
- `FAIL_TO_PASS`: List of test files that must pass after implementation
- `PASS_TO_PASS`: List of test files that must continue passing (Level 1 only)
- `image_name`: Docker image containing the development environment
- `repo`: Source repository (e.g., "owner/repo-name")
- `base_commit`: Git commit hash of the base version
- `problem_statement`: Detailed task description and requirements
- `repo_settings`: Repository configuration as a JSON string (generated from `python.py`)
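To make the schema concrete, here is a minimal sketch of a single instance. The field values are illustrative placeholders, not taken from the dataset:

```python
import json

# Illustrative instance -- values are made up to show the schema,
# not copied from an actual ACE-Bench task.
task = {
    "instance_id": "example-project__feature-0001",
    "patch": "diff --git a/src/feature.py b/src/feature.py\n...",
    "test_patch": "diff --git a/tests/test_feature.py b/tests/test_feature.py\n...",
    "FAIL_TO_PASS": ["tests/test_feature.py"],
    "PASS_TO_PASS": ["tests/test_existing.py"],
    "image_name": "acebench/example-project:latest",
    "repo": "owner/example-project",
    "base_commit": "0123456789abcdef",
    "problem_statement": "Implement the new feature described in ...",
    "repo_settings": json.dumps({
        "repository": "owner/example-project",
        "base_image": "python:3.11",
    }),
}

# A non-empty `patch` indicates a Level 1 instance;
# Level 2 instances leave it as an empty string.
level = 1 if task["patch"] else 2

# `repo_settings` is stored as a JSON string and must be decoded first.
settings = json.loads(task["repo_settings"])
```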

## Usage

```python
import json
from datasets import load_dataset

# Load Level 1 tasks
dataset_lv1 = load_dataset("BamChil/ACE-Bench", split="level1")

# Load Level 2 tasks
dataset_lv2 = load_dataset("BamChil/ACE-Bench", split="level2")

# Example: Access a task
task = dataset_lv1[0]
print(task['instance_id'])
print(task['problem_statement'])

# Parse repo_settings from JSON string
repo_settings = json.loads(task['repo_settings'])
print(repo_settings['repository'])
print(repo_settings['base_image'])
```
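The fields above are enough to drive a simple evaluation harness: run the `FAIL_TO_PASS` tests inside the task's Docker image and check that they pass. The sketch below only assembles the command string; the helper name and the exact container workflow are assumptions, not part of ACE-Bench:

```python
import shlex

def build_eval_command(image_name: str, fail_to_pass: list[str]) -> str:
    """Assemble a `docker run` command that executes a task's FAIL_TO_PASS
    tests inside its environment image.

    Hypothetical helper -- the harness actually used by ACE-Bench may
    invoke the container differently.
    """
    test_args = " ".join(shlex.quote(t) for t in fail_to_pass)
    return f"docker run --rm {shlex.quote(image_name)} pytest {test_args}"

cmd = build_eval_command("acebench/example-project:latest",
                         ["tests/test_feature.py"])
```

In practice a harness would also check out `base_commit` in `repo`, apply the agent's changes plus `test_patch`, and (for Level 1) verify the `PASS_TO_PASS` tests still succeed.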