This dataset provides corrections for the **OCaml, Lua, R, Racket, and Julia** portions of MultiPL-E.

## Dataset Summary
MultiPL-E is a large-scale dataset for evaluating code generation models across 22 programming languages.
However, analysis of the dataset revealed several logical errors, inconsistencies, and language-specific issues in the generated prompts and test cases. These issues can lead to inaccurate evaluation scores by unfairly penalizing models for correctly identifying flaws in the prompts.

Several problems in the HumanEval portion of the dataset were corrected:

* **`HumanEval_145_order_by_points`**: Clarified vague and ambiguous logic in the question to provide a more precise problem statement.
* **`HumanEval_148_bf`**: Fixed a contradiction between the provided examples and the main instructions.
* **`HumanEval_151_double_the_difference`**: Replaced an incorrect test case that produced an invalid result.
* **`HumanEval_162_string_to_md5`**: Addressed incorrect handling for language-specific `None`/`null` data types required by the test cases.
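For context on the `double_the_difference` fix, a reference implementation of the intended semantics (per the original HumanEval problem: sum the squares of the positive odd integers, ignoring negatives and non-integers) can be sketched as:

```python
def double_the_difference(numbers):
    # Sum the squares of entries that are positive odd integers;
    # negative numbers and non-integer values are ignored.
    return sum(n * n for n in numbers
               if isinstance(n, int) and n > 0 and n % 2 == 1)
```

A test case whose expected value disagrees with this specification would unfairly penalize models that implement the prompt correctly.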
### 2. General Prompt Ambiguities
* **0-Based Indexing:** Added clarifications to prompts where array/list index interpretation was ambiguous, explicitly enforcing a 0-based convention to ensure consistent behavior.
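The ambiguity matters because Lua, R, and Julia all index from 1 natively, so a phrase like "the element at index 2" reads differently depending on convention. A hypothetical illustration:

```python
data = ["a", "b", "c", "d"]

# Under the 0-based convention enforced by the corrected prompts,
# "index 2" refers to the third element.
zero_based = data[2]       # "c"

# A reading in 1-based terms (natural in Lua, R, and Julia) would
# pick the second element instead.
one_based = data[2 - 1]    # "b"
```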
* **R:** Corrected issues related to the handling of empty vectors, a common edge case.
* **OCaml:** Fixed incorrect usage of unary operators to align with OCaml's syntax.
* **Julia:** Resolved parsing issues caused by the triple-quote (`"""`) docstring character.
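The Julia issue can be checked mechanically: a well-formed prompt contains an even number of `"""` delimiters, while an unescaped `"""` inside a docstring terminates it early. A minimal detector (a hypothetical helper, not part of the MultiPL-E tooling):

```python
def has_unterminated_docstring(prompt: str) -> bool:
    # Julia docstrings are delimited by `"""`; an odd number of
    # delimiters means an embedded `"""` closed the docstring early.
    return prompt.count('"""') % 2 == 1

well_formed = '"""\nAdd two integers.\n"""\nfunction add(a, b)\n    a + b\nend'
broken = '"""\nReturn the literal string """ unchanged.\n"""\nfunction f(s)\nend'
```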
## Using This Dataset
This corrected dataset is designed to be a **drop-in replacement** for the official MultiPL-E data for OCaml, Lua, R, Racket, and Julia.
To use it, simply replace the original `humaneval-[lang]` files with the corrected versions provided in this repository. The data structure remains compatible with standard evaluation frameworks.
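A minimal install sketch, assuming both datasets are checked out locally. The directory layout and the `humaneval-[lang]` name prefixes (`ml`, `lua`, `r`, `rkt`, `jl` for OCaml, Lua, R, Racket, and Julia) are assumptions to adjust for your setup:

```python
import shutil
from pathlib import Path

# Assumed language tags used in the humaneval-[lang] file names.
LANGS = ["ml", "lua", "r", "rkt", "jl"]  # OCaml, Lua, R, Racket, Julia

def install_corrections(corrected_dir: Path, original_dir: Path) -> int:
    """Overwrite the original humaneval-[lang] files with the corrected
    versions; returns the number of files replaced."""
    replaced = 0
    for lang in LANGS:
        for src in corrected_dir.glob(f"humaneval-{lang}*"):
            shutil.copy2(src, original_dir / src.name)
            replaced += 1
    return replaced
```

Because the data structure is unchanged, no further configuration of the evaluation framework should be needed after the copy.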
## Citation and Attribution