BamChil committed · Commit 2b200ec · verified · 1 Parent(s): a7c4346

Update ACE-Bench dataset

Files changed (1): README.md (added, +135 −0)
---
language:
- en
license: mit
size_categories:
- n<100
task_categories:
- text-generation
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: FAIL_TO_PASS
    list: string
  - name: PASS_TO_PASS
    list: string
  - name: image_name
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  splits:
  - name: level1
    num_bytes: 448251
    num_examples: 8
  - name: level2
    num_bytes: 277869
    num_examples: 7
  download_size: 119867
  dataset_size: 726120
configs:
- config_name: default
  data_files:
  - split: level1
    path: data/level1-*
  - split: level2
    path: data/level2-*
tags:
- code
- agents
- software-engineering
---

# ACE-Bench: Agent Coding Evaluation Benchmark

## Dataset Description

ACE-Bench is a comprehensive benchmark designed to evaluate AI agents' capabilities in end-to-end, feature-level code generation. Unlike traditional benchmarks that focus on function-level or algorithm-specific tasks, ACE-Bench challenges agents to implement complete features within real-world software projects.

### Key Characteristics

- **Feature-Level Tasks**: Each task requires implementing a complete feature, including multiple functions, classes, and their interactions
- **Real-World Codebases**: Tasks are derived from actual open-source projects, preserving the complexity and context of production code
- **End-to-End Evaluation**: Agents must understand requirements, generate code, and pass comprehensive test suites
- **Two Difficulty Levels**:
  - **Level 1**: Agents receive masked code with interface signatures and must implement the complete functionality
  - **Level 2**: Agents receive only test files and must implement both the interface and the functionality from scratch

### Dataset Statistics

- **Total Instances**: 15
- **Level 1 Instances**: 8
- **Level 2 Instances**: 7
- **Total Size**: 709.10 KB
- **Download Size**: 117.06 KB

## Dataset Structure

Each instance in ACE-Bench contains:

- `instance_id`: Unique identifier for the task
- `patch`: Git diff of the reference implementation (Level 1) or an empty string (Level 2)
- `test_patch`: Git diff showing test file modifications
- `FAIL_TO_PASS`: List of test files that must pass after implementation
- `PASS_TO_PASS`: List of test files that must continue passing (Level 1 only)
- `image_name`: Docker image containing the development environment
- `repo`: Source repository (e.g., "owner/repo-name")
- `base_commit`: Git commit hash of the base version
- `problem_statement`: Detailed task description and requirements

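For illustration, a single instance with this schema might look like the following. The values are invented for this sketch (real instances carry full git diffs and detailed problem statements), and the `is_level2` helper is a hypothetical convenience, not part of the dataset:

```python
# A hypothetical ACE-Bench instance illustrating the schema above.
# All field values are invented for illustration only.
example_instance = {
    "instance_id": "owner__repo-0001",
    "patch": "diff --git a/src/feature.py b/src/feature.py\n...",
    "test_patch": "diff --git a/tests/test_feature.py b/tests/test_feature.py\n...",
    "FAIL_TO_PASS": ["tests/test_feature.py"],
    "PASS_TO_PASS": ["tests/test_existing.py"],
    "image_name": "acebench/owner__repo:base",
    "repo": "owner/repo-name",
    "base_commit": "0123456789abcdef",
    "problem_statement": "Implement the feature described below ...",
}

def is_level2(instance: dict) -> bool:
    """Level 2 instances ship an empty `patch` (no reference implementation)."""
    return instance["patch"] == ""
```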
## Usage

```python
from datasets import load_dataset

# Load Level 1 tasks
dataset_lv1 = load_dataset("BamChil/ACE-Bench", split="level1")

# Load Level 2 tasks
dataset_lv2 = load_dataset("BamChil/ACE-Bench", split="level2")

# Example: access a task
task = dataset_lv1[0]
print(task['instance_id'])
print(task['problem_statement'])
```

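Since each split is a sequence of records with the fields listed above, tasks can be sliced and grouped with plain Python. A small sketch, using hypothetical in-memory records in place of the real download so it runs offline (the record values here are invented):

```python
from collections import defaultdict

# Hypothetical task records standing in for real dataset rows,
# which would come from load_dataset("BamChil/ACE-Bench", split=...).
tasks = [
    {"instance_id": "alpha-0001", "repo": "org/alpha", "patch": "diff ..."},
    {"instance_id": "alpha-0002", "repo": "org/alpha", "patch": ""},
    {"instance_id": "beta-0001", "repo": "org/beta", "patch": "diff ..."},
]

# Group instance ids by source repository.
by_repo = defaultdict(list)
for task in tasks:
    by_repo[task["repo"]].append(task["instance_id"])

print(dict(by_repo))
# e.g. {'org/alpha': ['alpha-0001', 'alpha-0002'], 'org/beta': ['beta-0001']}
```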
## Evaluation

To evaluate an agent on ACE-Bench:

1. **Setup Environment**: Pull the Docker image specified in `image_name`
2. **Apply Patches**: Use `git apply` to apply the test patch inside the container
3. **Generate Code**: Have the agent generate code based on `problem_statement`
4. **Run Tests**: Execute the test suite specified in `FAIL_TO_PASS`
5. **Verify Results**: Ensure all `FAIL_TO_PASS` tests pass and no regressions occur in the `PASS_TO_PASS` tests

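The steps above can be sketched as command construction. This is a minimal outline under assumptions not stated in the dataset card: the image and instance names are invented, the container workspace path and the pytest runner are guesses, and a real harness would also inject the agent's generated code before running tests:

```python
def build_eval_commands(task: dict, patch_file: str = "test.patch") -> list[str]:
    """Construct shell commands for one evaluation run (a sketch).

    Mirrors the steps above: pull the image, apply the test patch
    inside the container, then run the FAIL_TO_PASS (and, for Level 1,
    PASS_TO_PASS) tests. Paths and the pytest runner are assumptions.
    """
    container = f"ace-{task['instance_id']}"
    tests = " ".join(task["FAIL_TO_PASS"] + task.get("PASS_TO_PASS", []))
    return [
        f"docker pull {task['image_name']}",                           # 1. setup environment
        f"docker run -d --name {container} {task['image_name']} sleep infinity",
        f"docker cp {patch_file} {container}:/workspace/{patch_file}",
        f"docker exec {container} git apply /workspace/{patch_file}",  # 2. apply test patch
        # 3. (agent-generated code would be injected here)
        f"docker exec {container} python -m pytest {tests}",           # 4./5. run and verify
    ]

# Hypothetical task record for illustration.
task = {
    "instance_id": "demo-0001",
    "image_name": "acebench/demo:latest",
    "FAIL_TO_PASS": ["tests/test_new.py"],
    "PASS_TO_PASS": ["tests/test_old.py"],
}
commands = build_eval_commands(task)
print(commands[0])  # docker pull acebench/demo:latest
```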
## Citation

If you use ACE-Bench in your research, please cite:

```bibtex
@dataset{ace_bench_2025,
  title={ACE-Bench: Agent Coding Evaluation Benchmark},
  author={ACE-Bench Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/BamChil/ACE-Bench}
}
```

## License

This dataset is released under the MIT License. Individual code snippets retain their original repository licenses.

## Contact

For questions or feedback, please open an issue on the [GitHub repository](https://github.com/BamChil/ACE-Bench).