---
pretty_name: HumanEval
license: mit
language:
  - en
tags:
  - code-generation
  - python
  - text
task_categories:
  - text-generation
---

# HumanEval

This repository hosts a copy of the widely used HumanEval dataset, a benchmark for evaluating the code generation capabilities of Large Language Models (LLMs).

HumanEval consists of 164 hand-written programming problems, each asking a model to produce working Python code from a function signature and a natural-language docstring. It is frequently used in research papers assessing LLMs' code generation performance, particularly in the context of automated programming.

## Contents

- `humaneval.jsonl`: the standard set of 164 programming tasks, one JSON object per line.

Each entry contains:

```json
{
  "task_id": "...",
  "prompt": "...",
  "canonical_solution": "...",
  "test": "...",
  "entry_point": "..."
}
```
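Taken together, these fields are enough to check a completion: concatenating the prompt, a solution body, and the test yields a runnable program whose `check` function is called with the entry point. A minimal sketch, using a simplified toy entry rather than a real HumanEval task:

```python
# Toy entry illustrating how the fields fit together (not a real
# HumanEval task; the actual prompts and tests are more elaborate).
entry = {
    "task_id": "HumanEval/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "canonical_solution": "    return a + b\n",
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3\n",
    "entry_point": "add",
}

# Assemble the full program: signature + docstring, solution body, test.
program = entry["prompt"] + entry["canonical_solution"] + entry["test"]

# Execute it in a fresh namespace and run the test against the entry point.
namespace = {}
exec(program, namespace)
namespace["check"](namespace[entry["entry_point"]])  # raises AssertionError on failure
print("passed")
```

Real harnesses substitute a model-generated completion for `canonical_solution` and sandbox the `exec` call, since the generated code is untrusted.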

## Usage

```python
from datasets import load_dataset

ds = load_dataset("S3IC/humaneval")
```
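The underlying jsonl file can also be read without the `datasets` library, since each line is a standalone JSON object. A sketch, using toy entries and a hypothetical local filename for illustration:

```python
import json

# Toy two-entry stand-in for the real humaneval.jsonl (field values simplified).
sample = [
    {"task_id": "HumanEval/0", "entry_point": "add"},
    {"task_id": "HumanEval/1", "entry_point": "sub"},
]
with open("toy_humaneval.jsonl", "w") as f:
    for row in sample:
        f.write(json.dumps(row) + "\n")

# Reading it back: parse one JSON object per line.
with open("toy_humaneval.jsonl") as f:
    tasks = [json.loads(line) for line in f]

print(len(tasks), tasks[0]["task_id"])  # 2 HumanEval/0
```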

## Source

This dataset is taken from the public HumanEval release: https://github.com/openai/human-eval

## License

HumanEval is released under the MIT license of the original authors. See the LICENSE file for details.