
CodeBench-HumanEval-Curated

A curated subset of code evaluation problems designed for assessing language model coding capabilities.

Dataset Description

This dataset contains a curated selection of programming problems from OpenAI's HumanEval benchmark. We filtered the problems for quality and augmented them with additional test cases.

Source Data

The problems in this dataset are derived from OpenAI HumanEval, a benchmark for evaluating code generation capabilities of language models.

Data Processing

  • Selected top 100 problems based on discriminative power
  • Added edge case test coverage
  • Standardized docstring formats
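The selection step above can be sketched as follows. This is a hypothetical illustration, not the actual curation script: it treats "discriminative power" as the variance of per-model pass rates (problems that every model solves, or none solves, carry little signal), and the pass-rate numbers are made up.

```python
# Hypothetical sketch: rank problems by discriminative power, taken
# here as the population variance of per-model pass rates.
# All pass-rate data below is invented for illustration.
from statistics import pvariance

pass_rates = {
    "HumanEval/0": [0.95, 0.90, 0.98],   # too easy: low variance
    "HumanEval/10": [0.10, 0.55, 0.90],  # spreads models apart
    "HumanEval/32": [0.02, 0.05, 0.03],  # too hard: low variance
}

# Sort problem IDs from most to least discriminative.
ranked = sorted(pass_rates, key=lambda k: pvariance(pass_rates[k]), reverse=True)
top = ranked[:2]  # the real curation kept the top 100
print(top)
```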

Usage

from datasets import load_dataset
dataset = load_dataset("toolevalxm/CodeBench-HumanEval-Curated")
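Once loaded, records can be scored HumanEval-style. The sketch below assumes each record carries the standard HumanEval fields "prompt", "test", and "entry_point" (an assumption about this curated subset); a toy problem stands in for a real record so the example is self-contained.

```python
# Minimal sketch of HumanEval-style scoring. Field names ("prompt",
# "test", "entry_point") are assumed to match the original benchmark.
problem = {
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "test": "def check(candidate):\n"
            "    assert candidate(2, 3) == 5\n"
            "    assert candidate(-1, 1) == 0\n",
    "entry_point": "add",
}
completion = "    return a + b\n"  # stand-in for a model's completion

def passes(problem, completion):
    ns = {}
    exec(problem["prompt"] + completion, ns)  # define the candidate function
    exec(problem["test"], ns)                 # define check()
    try:
        ns["check"](ns[problem["entry_point"]])
        return True
    except AssertionError:
        return False

print(passes(problem, completion))
```

Note that a real harness runs model-generated code in a sandboxed subprocess with a timeout rather than calling exec in-process, since completions are untrusted.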

Citation

If you use this dataset, please cite the original HumanEval paper (Chen et al., 2021, "Evaluating Large Language Models Trained on Code") along with this curation effort.

Attribution

The underlying problems originate from OpenAI's HumanEval; attribution to OpenAI is required when using or redistributing this dataset.
