---
dataset_info:
  - config_name: humaneval-jl
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 167490
        num_examples: 159
    download_size: 66247
    dataset_size: 167490
  - config_name: humaneval-lua
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 184572
        num_examples: 161
    download_size: 66774
    dataset_size: 184572
  - config_name: humaneval-ml
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 170283
        num_examples: 155
    download_size: 65815
    dataset_size: 170283
  - config_name: humaneval-r
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 199744
        num_examples: 161
    download_size: 68771
    dataset_size: 199744
  - config_name: humaneval-rkt
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 196214
        num_examples: 161
    download_size: 67226
    dataset_size: 196214
configs:
  - config_name: humaneval-jl
    data_files:
      - split: test
        path: humaneval-jl/test-*
  - config_name: humaneval-lua
    data_files:
      - split: test
        path: humaneval-lua/test-*
  - config_name: humaneval-ml
    data_files:
      - split: test
        path: humaneval-ml/test-*
  - config_name: humaneval-r
    data_files:
      - split: test
        path: humaneval-r/test-*
  - config_name: humaneval-rkt
    data_files:
      - split: test
        path: humaneval-rkt/test-*
---

# Dataset Card for MultiPL-E-fixed (OCaml, Lua, R, Racket, Julia)

This dataset provides corrections for the OCaml, Lua, R, Racket, and Julia portions of the nuprl/MultiPL-E benchmark.

## Dataset Summary

MultiPL-E is a large-scale dataset for evaluating code generation models across 22 programming languages.

However, analysis of the dataset revealed several logical errors, inconsistencies, and language-specific issues in the generated prompts and test cases. These issues can lead to inaccurate evaluation scores by unfairly penalizing models for correctly identifying flaws in the prompts.

This repository provides a corrected version of the dataset specifically for OCaml, Lua, R, Racket, and Julia. The goal of this version is to provide a more reliable and accurate benchmark for evaluating Large Language Models on these languages.

## Summary of Corrections

A detailed table of all corrections (logical problems, prompt ambiguities, and language-specific fixes) is available here:

🔗 Google Sheet of Corrections

## Using This Dataset

This corrected dataset is designed to be a drop-in replacement for the official MultiPL-E data for OCaml, Lua, R, Racket, and Julia.

To use it, simply replace the original `humaneval-[lang]` files with the corrected versions provided in this repository. The data structure remains compatible with standard evaluation frameworks.
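To illustrate how the schema above is typically consumed, here is a minimal sketch of a MultiPL-E-style evaluation step: a model completion is truncated at the first `stop_tokens` match, then concatenated with `prompt` and `tests` to form an executable program. The field names match this dataset's schema; the example Lua row, the completion, and the helper function are invented for illustration and are not taken from the dataset.

```python
# Sketch of assembling one MultiPL-E-style row into a runnable test program.
# (To fetch real rows, you would use datasets.load_dataset with a config name
# such as "humaneval-lua" and split="test".)

def truncate_at_stop(completion: str, stop_tokens: list[str]) -> str:
    """Cut a model completion at the earliest occurring stop token."""
    cut = len(completion)
    for tok in stop_tokens:
        idx = completion.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]

# Illustrative row with the same fields as this dataset's schema.
row = {
    "prompt": "-- Return the sum of two numbers.\nlocal function add(a, b)\n",
    "tests": "assert(add(1, 2) == 3)\n",
    "stop_tokens": ["\nlocal", "\n--"],
}

# A hypothetical model completion that keeps generating past the function body.
completion = "    return a + b\nend\n\nlocal x = 1\n"

# prompt + truncated completion + tests = the program the harness executes.
program = row["prompt"] + truncate_at_stop(completion, row["stop_tokens"]) + row["tests"]
```

The extra `local x = 1` line is dropped by the stop-token truncation, so only the completed function body is kept before the test assertions are appended.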

## Citation and Attribution

If you use this corrected version of the dataset in your work, please cite the original MultiPL-E paper and acknowledge this repository for the corrections.

Original Paper:

```bibtex
@article{cassano2023multipl,
  title={MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation},
  author={Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and Greenberg, Michael and Jangda, Abhinav},
  journal={IEEE Transactions on Software Engineering},
  volume={49},
  year={2023}
}
```