---
dataset_info:
- config_name: '1.0'
  features:
  - name: instance_id
    dtype: string
  - name: version
    dtype: string
  - name: gold_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: test_patch
    dtype: 'null'
  - name: pre_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: pre_scripts
    dtype: 'null'
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: base_commit_timestamp
    dtype: string
  - name: hints_text
    dtype: 'null'
  - name: created_at
    dtype: 'null'
  - name: problem_statement
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: environment_setup_commit
    dtype: string
  - name: evaluation
    struct:
    - name: FAIL_TO_PASS
      sequence: string
    - name: PASS_TO_PASS
      dtype: 'null'
  splits:
  - name: test
    num_bytes: 36354296
    num_examples: 980
  download_size: 6132695
  dataset_size: 36354296
- config_name: '1.1'
  features:
  - name: instance_id
    dtype: string
  - name: version
    dtype: string
  - name: gold_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: test_patch
    dtype: 'null'
  - name: pre_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: pre_scripts
    dtype: 'null'
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: base_commit_timestamp
    dtype: string
  - name: hints_text
    dtype: 'null'
  - name: created_at
    dtype: 'null'
  - name: problem_statement
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: environment_setup_commit
    dtype: string
  - name: evaluation
    struct:
    - name: FAIL_TO_PASS
      sequence: string
    - name: PASS_TO_PASS
      dtype: 'null'
  splits:
  - name: test
    num_bytes: 36354125
    num_examples: 980
  download_size: 6139991
  dataset_size: 36354125
- config_name: default
  features:
  - name: instance_id
    dtype: string
  - name: version
    dtype: string
  - name: gold_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: test_patch
    dtype: 'null'
  - name: pre_patches
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: pre_scripts
    dtype: 'null'
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: base_commit_timestamp
    dtype: string
  - name: hints_text
    dtype: 'null'
  - name: created_at
    dtype: 'null'
  - name: problem_statement
    struct:
    - name: code
      dtype: string
    - name: test
      dtype: 'null'
  - name: environment_setup_commit
    dtype: string
  - name: evaluation
    struct:
    - name: FAIL_TO_PASS
      sequence: string
    - name: PASS_TO_PASS
      dtype: 'null'
  splits:
  - name: test
    num_bytes: 36343472
    num_examples: 980
  download_size: 6132344
  dataset_size: 36343472
configs:
- config_name: '1.0'
  data_files:
  - split: test
    path: 1.0/test-*
- config_name: '1.1'
  data_files:
  - split: test
    path: 1.1/test-*
---

# Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems in HumanEval and MBPP. This raises a natural question: can LLMs achieve code completion performance comparable to that of human developers? Unfortunately, this question cannot be answered with existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development tasks. In addition, existing benchmarks often use poor code correctness metrics, leading to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, underscoring the need for stronger LLMs that can help developers in real-world software development.

REPOCOD_Unified is a variant of REPOCOD that follows a format similar to [SWE-Bench](https://www.swebench.com/) for easier integration into established inference pipelines.

* For more details on data collection and evaluation results, please refer to our arXiv [preprint](https://arxiv.org/abs/2410.21647).

* Example code for downloading repositories, preparing repository snapshots, and running test cases for evaluation is provided in the [REPOCOD repository](https://github.com/lt-asset/REPOCOD).

* Check our [Leaderboard](https://lt-asset.github.io/REPOCOD/) for preliminary results using SOTA LLMs with RAG.
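
The dataset can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example, assuming the Hub repository ID matches the project name under the `lt-asset` organization; adjust the ID if this card is hosted elsewhere. Per the metadata above, the `'1.0'` and `'1.1'` configs each expose a single `test` split.

```python
def load_repocod_unified(config: str = "1.0"):
    """Load one REPOCOD_Unified config's test split from the Hugging Face Hub.

    NOTE: the repository ID below is an assumption based on the project name;
    replace it with the actual Hub ID of this dataset if it differs.
    """
    from datasets import load_dataset  # pip install datasets
    return load_dataset("lt-asset/REPOCOD_Unified", config, split="test")

if __name__ == "__main__":
    ds = load_repocod_unified("1.0")
    print(len(ds))                 # 980 instances per this card
    print(ds[0]["instance_id"])
```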


```
"instance_id": Instance ID in REPOCOD,
"version": Version of REPOCOD,
"gold_patches": {
    "code": Patch file that restores the target code,
    "test": Patch file that restores the relevant tests for the target code
},
"test_patch": None,
"pre_patches": {
    "code": Patch file that removes the target code,
    "test": Patch file that removes the relevant tests for the target code
},
"pre_scripts": None,
"repo": {GitHub User Name}/{Project Name},
"base_commit": Base commit,
"base_commit_timestamp": Timestamp of the base commit,
"hints_text": None,
"created_at": None,
"problem_statement": {
    "code": Problem statement for code generation,
    "test": Problem statement for test generation
},
"environment_setup_commit": Base commit,
"evaluation": {
    "FAIL_TO_PASS": List of relevant test cases,
    "PASS_TO_PASS": None (all remaining tests that pass; we choose not to run the PASS_TO_PASS tests to avoid the computational cost)
}
```
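
As an illustration, a record following this schema can be handled as a plain Python dict. All values below are hypothetical placeholders, not actual dataset entries, and `is_resolved` is a sketch of one plausible SWE-Bench-style resolution check (an instance counts as resolved when all of its FAIL_TO_PASS tests pass), not the official evaluation harness:

```python
# Hypothetical REPOCOD_Unified record; every value is an illustrative placeholder.
record = {
    "instance_id": "example-user__example-project-0001",
    "version": "1.0",
    "gold_patches": {"code": "diff --git a/mod.py b/mod.py ...", "test": None},
    "test_patch": None,
    "pre_patches": {"code": "diff --git a/mod.py b/mod.py ...", "test": None},
    "pre_scripts": None,
    "repo": "example-user/example-project",
    "base_commit": "0123456789abcdef",
    "base_commit_timestamp": "2024-01-01T00:00:00Z",
    "hints_text": None,
    "created_at": None,
    "problem_statement": {"code": "Implement the target function ...", "test": None},
    "environment_setup_commit": "0123456789abcdef",
    "evaluation": {
        "FAIL_TO_PASS": ["tests/test_mod.py::test_target"],
        "PASS_TO_PASS": None,
    },
}

def is_resolved(passed_tests, evaluation):
    """True when every FAIL_TO_PASS test is among the tests that passed."""
    return set(evaluation["FAIL_TO_PASS"]).issubset(passed_tests)

print(is_resolved({"tests/test_mod.py::test_target"}, record["evaluation"]))  # True
print(is_resolved(set(), record["evaluation"]))                               # False
```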

## Citation
```
@inproceedings{liang2025repocod,
    title = {Can Language Models Replace Programmers for Coding? {REPOCOD} Says `Not Yet'},
    author = {Liang, Shanchao and Jiang, Nan and Hu, Yiran and Tan, Lin},
    editor = {Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher},
    booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    month = {jul},
    year = {2025},
    address = {Vienna, Austria},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2025.acl-long.1204/},
    doi = {10.18653/v1/2025.acl-long.1204},
    pages = {24698--24717},
    ISBN = {979-8-89176-251-0},
}
```