dataset_info:
  - config_name: '1.0'
    features:
      - name: instance_id
        dtype: string
      - name: version
        dtype: string
      - name: gold_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: test_patch
        dtype: 'null'
      - name: pre_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: pre_scripts
        dtype: 'null'
      - name: repo
        dtype: string
      - name: base_commit
        dtype: string
      - name: base_commit_timestamp
        dtype: string
      - name: hints_text
        dtype: 'null'
      - name: created_at
        dtype: 'null'
      - name: problem_statement
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: environment_setup_commit
        dtype: string
      - name: evaluation
        struct:
          - name: FAIL_TO_PASS
            sequence: string
          - name: PASS_TO_PASS
            dtype: 'null'
    splits:
      - name: test
        num_bytes: 36354296
        num_examples: 980
    download_size: 6132695
    dataset_size: 36354296
  - config_name: '1.1'
    features:
      - name: instance_id
        dtype: string
      - name: version
        dtype: string
      - name: gold_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: test_patch
        dtype: 'null'
      - name: pre_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: pre_scripts
        dtype: 'null'
      - name: repo
        dtype: string
      - name: base_commit
        dtype: string
      - name: base_commit_timestamp
        dtype: string
      - name: hints_text
        dtype: 'null'
      - name: created_at
        dtype: 'null'
      - name: problem_statement
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: environment_setup_commit
        dtype: string
      - name: evaluation
        struct:
          - name: FAIL_TO_PASS
            sequence: string
          - name: PASS_TO_PASS
            dtype: 'null'
    splits:
      - name: test
        num_bytes: 36354125
        num_examples: 980
    download_size: 6139991
    dataset_size: 36354125
  - config_name: default
    features:
      - name: instance_id
        dtype: string
      - name: version
        dtype: string
      - name: gold_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: test_patch
        dtype: 'null'
      - name: pre_patches
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: pre_scripts
        dtype: 'null'
      - name: repo
        dtype: string
      - name: base_commit
        dtype: string
      - name: base_commit_timestamp
        dtype: string
      - name: hints_text
        dtype: 'null'
      - name: created_at
        dtype: 'null'
      - name: problem_statement
        struct:
          - name: code
            dtype: string
          - name: test
            dtype: 'null'
      - name: environment_setup_commit
        dtype: string
      - name: evaluation
        struct:
          - name: FAIL_TO_PASS
            sequence: string
          - name: PASS_TO_PASS
            dtype: 'null'
    splits:
      - name: test
        num_bytes: 36343472
        num_examples: 980
    download_size: 6132344
    dataset_size: 36343472
configs:
  - config_name: '1.0'
    data_files:
      - split: test
        path: 1.0/test-*
  - config_name: '1.1'
    data_files:
      - split: test
        path: 1.1/test-*

Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems in HumanEval and MBPP. A natural question, then, is whether LLMs can achieve code completion performance comparable to that of human developers. Unfortunately, one cannot answer this question using existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development. In addition, existing benchmarks often use poor code correctness metrics, leading to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluations of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, revealing the need for stronger LLMs that can help developers in real-world software development.

REPOCOD_Unified is a variation of REPOCOD that follows a format similar to SWE-Bench's, for easier integration into established inference pipelines.

  • For more details on data collection and evaluation results, please refer to our arXiv preprint.

  • Example code for downloading repositories, preparing repository snapshots, and running test cases for evaluation is provided at code

  • Check our Leaderboard for preliminary results using SOTA LLMs with RAG.
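As a minimal sketch, one way to pull a single config of this dataset with the Hugging Face `datasets` library is shown below. The Hub id `lt-asset/REPOCOD_Unified` is an assumption inferred from this card (verify the exact id on the Hub); the config names come from the `dataset_info` section above.

```python
# Known configs per this card's dataset_info ('default' appears in
# dataset_info even though only '1.0' and '1.1' are listed under configs).
KNOWN_CONFIGS = ("1.0", "1.1", "default")


def load_repocod(config: str = "1.1", split: str = "test"):
    """Load one config of REPOCOD_Unified; every config exposes a 'test' split.

    NOTE: the Hub id "lt-asset/REPOCOD_Unified" is an assumption, not
    confirmed by this card -- check the dataset page before relying on it.
    """
    if config not in KNOWN_CONFIGS:
        raise ValueError(
            f"unknown config {config!r}; expected one of {KNOWN_CONFIGS}"
        )
    # Deferred import so the helper stays importable without `datasets` installed.
    from datasets import load_dataset

    return load_dataset("lt-asset/REPOCOD_Unified", config, split=split)
```

The validation step fails fast on a typo'd config name instead of triggering a network round-trip to the Hub.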

"instance_id": Instance ID in REPOCOD
"version": Version of REPOCOD
"gold_patches": {
    "code": Patch file to restore the target code,
    "test": Patch file to restore the relevant tests for the target code
}
"test_patch": None,
"pre_patches": {
    "code": Patch file to remove the target code,
    "test": Patch file to remove the relevant tests for the target code
}
"pre_scripts": None,
"repo": {GitHub User Name}/{Project Name}
"base_commit": base commit
"base_commit_timestamp": time of the base commit
"hints_text": None,
"created_at": None,
"problem_statement": {
    "code": Problem statement for code generation.
    "test": Problem statement for test generation.
}
"environment_setup_commit": base commit
"evaluation": {
    "FAIL_TO_PASS": list of relevant test cases
    "PASS_TO_PASS": None, (all remaining tests that passes, we choose not to run the PASS_TO_PASS tests to avoid the computational cost)
}
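The field layout above can be consumed directly when driving an evaluation harness. The sketch below builds a pytest invocation from a record's FAIL_TO_PASS list; the sample record (its `instance_id`, repo, and test path) is a hypothetical illustration, and the pytest command shape is an assumption, not the official evaluation script.

```python
# Hypothetical record mirroring the documented REPOCOD_Unified fields;
# values are illustrative, not taken from the dataset.
sample = {
    "instance_id": "astropy-0001",
    "repo": "astropy/astropy",
    "base_commit": "0123abcd",
    "evaluation": {
        "FAIL_TO_PASS": ["tests/test_core.py::test_example"],
        "PASS_TO_PASS": None,  # not shipped, per the card
    },
}


def fail_to_pass_command(record: dict) -> list[str]:
    """Build a pytest command that runs only the record's FAIL_TO_PASS tests."""
    tests = record["evaluation"]["FAIL_TO_PASS"]
    return ["python", "-m", "pytest", *tests]
```

Calling `fail_to_pass_command(sample)` yields a command list suitable for `subprocess.run(...)` inside the checked-out repository snapshot at `base_commit`.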

Citation

@inproceedings{liang2025repocod,
    title = {Can Language Models Replace Programmers for Coding? {REPOCOD} Says `Not Yet'},
    author = {Liang, Shanchao and Jiang, Nan and Hu, Yiran and Tan, Lin},
    editor = {Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher},
    booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    month = {jul},
    year = {2025},
    address = {Vienna, Austria},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2025.acl-long.1204/},
    doi = {10.18653/v1/2025.acl-long.1204},
    pages = {24698--24717},
    ISBN = {979-8-89176-251-0},
}