# CodeBench-HumanEval-Curated

A curated subset of code evaluation problems designed for assessing the coding capabilities of language models.
## Dataset Description

This dataset contains carefully selected programming problems from the original HumanEval benchmark created by OpenAI. We performed quality filtering and added enhanced test cases.

### Source Data

The problems in this dataset are derived from [OpenAI HumanEval](https://github.com/openai/human-eval), a benchmark for evaluating the code generation capabilities of language models.
### Data Processing

- Selected the top 100 problems based on discriminative power
- Added edge-case test coverage
- Standardized docstring formats
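The edge-case coverage added during curation can be sketched as follows. This is an illustrative example only: the field names (`prompt`, `canonical_solution`, `entry_point`) follow the original HumanEval schema, but the toy problem and the specific checks are hypothetical, not taken from this dataset.

```python
# Illustrative sketch: validating added edge-case tests against a
# problem's canonical solution, using HumanEval-style fields.
problem = {
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "canonical_solution": "    return a + b\n",
    "entry_point": "add",
}

# Edge-case checks of the kind added during curation (hypothetical).
edge_tests = "assert add(0, 0) == 0\nassert add(-1, 1) == 0\n"

namespace = {}
# Build the full function from prompt + canonical solution, then run
# the extra tests in the same namespace; a failure raises AssertionError.
exec(problem["prompt"] + problem["canonical_solution"], namespace)
exec(edge_tests, namespace)
print("all edge-case tests passed")
```

In practice such checks run in a sandboxed subprocess rather than a bare `exec`, since model-generated code is untrusted.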
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("toolevalxm/CodeBench-HumanEval-Curated")
```
## Citation

If you use this dataset, please cite the original HumanEval paper as well as this curation effort.

**Attribution**

This dataset requires attribution to OpenAI.