- split: test
  path: humaneval-rkt/test-*
---

# Dataset Card for MultiPL-E-fixed (OCaml, Lua, R, Racket, Julia)

This dataset provides corrections for the **OCaml, Lua, R, Racket, and Julia** portions of the [nuprl/MultiPL-E](https://github.com/nuprl/MultiPL-E) benchmark.

### Original Dataset Information

- **Repository:** https://github.com/nuprl/MultiPL-E
- **Paper:** https://ieeexplore.ieee.org/abstract/document/10103177
- **Original Point of Contact:** carolyn.anderson@wellesley.edu, mfeldman@oberlin.edu, a.guha@northeastern.edu

### This Version

- **Repository:** https://github.com/jsbyun121/MultiPL-E-fixed

---

## Dataset Summary

MultiPL-E is a large-scale dataset for evaluating code generation models across 22 programming languages. It was created by translating the Python HumanEval (OpenAI) and MBPP (Google) benchmarks into other languages using a compiler-based approach.

However, analysis of the dataset revealed several logical errors, inconsistencies, and language-specific issues in the generated prompts and test cases. These issues can distort evaluation scores by penalizing solutions that are correct with respect to the prompt but fail a flawed test.

This repository provides a **corrected version** of the dataset specifically for **OCaml, Lua, R, Racket, and Julia**. The goal of this version is to provide a more reliable and accurate benchmark for evaluating Large Language Models on these languages.

## Summary of Corrections

The following modifications were made to address issues in the original dataset.

### 1. Logical Problems in Prompts and Test Cases

The following problems in the HumanEval portion of the dataset were corrected:

* **`HumanEval_75_is_multiply_prime`**: Resolved a mismatch between the problem instructions and the test cases.
* **`HumanEval_92_any_int`**: Fixed an incorrect test case that did not align with the problem's requirements.
* **`HumanEval_116_sort_array`**: Corrected a discrepancy between the sorting criteria in the instructions and the test cases.
* **`HumanEval_128_prod_signs`**: Amended an incorrect example in the prompt's docstring.
* **`HumanEval_140_fix_spaces`**: Corrected a faulty test case.
* **`HumanEval_142_sum_squares`**: Repaired corrupted or syntactically incorrect examples.
* **`HumanEval_145_order_by_points`**: Clarified vague and ambiguous logic to give a more precise problem statement.
* **`HumanEval_148_bf`**: Fixed a contradiction between the provided examples and the main instructions.
* **`HumanEval_151_double_the_difference`**: Replaced an incorrect test case that produced an invalid result.
* **`HumanEval_162_string_to_md5`**: Addressed unspecified handling of the `None`/`null` values required by the test cases.

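For example, `HumanEval_92_any_int` hinges on semantics that a faulty test easily contradicts. As a point of reference, here is the intended behavior as stated in the original problem (an illustration only, not the corrected test suite itself):

```python
def any_int(x, y, z):
    """Intended behavior of HumanEval_92_any_int: return True iff all
    three arguments are integers and one of them equals the sum of the
    other two."""
    if not all(isinstance(v, int) for v in (x, y, z)):
        return False
    return x + y == z or x + z == y or y + z == x

print(any_int(5, 2, 7))       # one value is the sum of the other two -> True
print(any_int(3.6, -2.2, 2))  # non-integers are rejected -> False
```

A test case contradicting either branch of this definition would penalize a model that implements the problem exactly as written.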
### 2. General Prompt Ambiguities

* **0-Based Indexing:** Added clarifications to prompts where array/list index interpretation was ambiguous, explicitly enforcing a 0-based convention to ensure consistent behavior.

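The stakes of that clarification are easy to see: three of the five target languages here (Lua, R, and Julia) index from 1, so an unstated convention changes the expected answer. A minimal sketch with hypothetical helper names:

```python
def element_at_0_based(xs, i):
    # "Element i" under the 0-based convention the corrected prompts enforce.
    return xs[i]

def element_at_1_based(xs, i):
    # The same request as a Lua/R/Julia programmer might read it.
    return xs[i - 1]

xs = [10, 20, 30]
# Without a stated convention, "return element 2" has two defensible answers:
print(element_at_0_based(xs, 2))  # 30
print(element_at_1_based(xs, 2))  # 20
```
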
### 3. Language-Specific Fixes

* **R:** Corrected issues related to the handling of empty vectors, a common edge case.
* **OCaml:** Fixed incorrect usage of unary operators to align with OCaml's syntax.
* **Julia:** Resolved parsing issues caused by the triple-quote (`"""`) docstring delimiter.
* **Lua & Racket:** `[Add a brief, high-level description of the fixes for these languages here.]`

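To illustrate the Julia issue: Julia's `"""` docstrings both interpolate `$` and terminate on an unescaped run of quotes, so docstring text copied verbatim from a Python prompt can produce an unparsable file. A blunt escaping sketch (a hypothetical helper, not the exact fix applied in this repository):

```python
def escape_for_julia_string(text: str) -> str:
    # Escape the three characters that are active inside a Julia string
    # literal: backslash, double quote, and the `$` interpolation marker.
    return (text.replace("\\", "\\\\")
                .replace('"', '\\"')
                .replace("$", "\\$"))

# A docstring body that would otherwise terminate the Julia docstring early:
body = 'Given a string like """abc""", return its length.'
print('"""\n' + escape_for_julia_string(body) + '\n"""')
```
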
## Using This Dataset

This corrected dataset is designed to be a **drop-in replacement** for the official MultiPL-E data for OCaml, Lua, R, Racket, and Julia.

To use it, simply replace the original `humaneval-[lang]` files with the corrected versions provided in this repository. The data structure remains compatible with standard evaluation frameworks like the [BigCode Code Generation LM Harness](https://github.com/bigcode-project/bigcode-evaluation-harness).

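As a concrete sketch of the record shape (the field names follow the standard MultiPL-E schema of `name`, `prompt`, `tests`, and `stop_tokens`; the Lua record and the `truncate_at_stop` helper below are hand-written illustrations, not rows from this dataset), an evaluation harness assembles each executed program as prompt + truncated completion + tests:

```python
# A hand-written stand-in for one row of a humaneval-[lang] split.
record = {
    "name": "HumanEval_0_add_example",
    "prompt": "-- Add two numbers.\nlocal function add(a, b)\n",
    "tests": "\nassert(add(1, 2) == 3)\nprint('ok')\n",
    "stop_tokens": ["\nlocal", "\nfunction", "\n--", "\n\n"],
}

def truncate_at_stop(completion: str, stop_tokens) -> str:
    # Cut the raw model completion at the earliest stop token, as
    # MultiPL-E-style harnesses do before running the tests.
    cut = len(completion)
    for tok in stop_tokens:
        idx = completion.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]

completion = "  return a + b\nend\n\nprint('stray trailing text')"
body = truncate_at_stop(completion, record["stop_tokens"])

# The program that actually gets executed against the test cases:
program = record["prompt"] + body + record["tests"]
print(program)
```

Because only the prompts and tests were corrected, swapping in the fixed `humaneval-[lang]` files requires no other changes to this pipeline.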
## Citation and Attribution

If you use this corrected version of the dataset in your work, please cite the original MultiPL-E paper and acknowledge this repository for the corrections.

**Original Paper:**

```bibtex
@article{cassano2023multipl,
  title={MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation},
  author={Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and Greenberg, Michael and Jangda, Abhinav},
  journal={IEEE Transactions on Software Engineering},
  year={2023},
  publisher={IEEE}
}
```