# CodeBench-HumanEval-Curated

A curated subset of code evaluation problems designed for assessing language model coding capabilities.

## Dataset Description

This dataset contains programming problems selected from the original HumanEval benchmark created by OpenAI. The problems were quality-filtered, and each one received additional test cases.

### Source Data

The problems in this dataset are derived from [OpenAI HumanEval](https://github.com/openai/human-eval), a benchmark for evaluating code generation capabilities of language models.

### Data Processing

- Selected the 100 most discriminative problems (those whose pass rates best separate stronger from weaker models)
- Added test cases covering edge cases (empty inputs, boundary values, etc.)
- Standardized docstring formats across all prompts
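Each record is assumed to follow the upstream HumanEval schema (`task_id`, `prompt`, `canonical_solution`, `test`, `entry_point`); the sketch below shows an illustrative record and a simple field check. The sample row is invented for demonstration, not taken from the dataset.

```python
# Fields assumed from the upstream HumanEval schema; the sample record
# below is illustrative only, not an actual row of this dataset.
EXPECTED_FIELDS = {"task_id", "prompt", "canonical_solution", "test", "entry_point"}

sample = {
    "task_id": "HumanEval/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "canonical_solution": "    return a + b\n",
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3\n",
    "entry_point": "add",
}

def validate_record(record: dict) -> bool:
    """Check that a record carries every expected field as a string."""
    return EXPECTED_FIELDS <= record.keys() and all(
        isinstance(record[f], str) for f in EXPECTED_FIELDS
    )

print(validate_record(sample))  # expect: True
```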

## Usage

```python
from datasets import load_dataset
dataset = load_dataset("toolevalxm/CodeBench-HumanEval-Curated")
```
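Once loaded, a problem can be scored by running a model completion against its `test` code. The sketch below uses an invented toy problem and a plain `exec`-based checker, assuming the HumanEval-style `check(candidate)` convention; in practice, untrusted model output should run in a proper sandbox.

```python
# Toy problem mirroring the HumanEval schema (illustrative, not a real row).
problem = {
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3\n",
    "entry_point": "add",
}

def passes(problem: dict, completion: str) -> bool:
    """Return True if prompt + completion passes the problem's check()."""
    ns: dict = {}
    try:
        exec(problem["prompt"] + completion, ns)  # define the candidate function
        exec(problem["test"], ns)                 # define check()
        ns["check"](ns[problem["entry_point"]])   # run the test assertions
        return True
    except Exception:
        return False

print(passes(problem, "    return a + b\n"))  # expect: True
print(passes(problem, "    return a - b\n"))  # expect: False
```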

## Citation

If you use this dataset, please cite the original HumanEval paper and this curation effort.


## Attribution

This dataset requires attribution to OpenAI.