
kalikako committed (verified)
Commit bdbf79b
Parent(s): 7dac6eb

Upload humaneval dataset and README

Files changed (2):
  1. README.md +52 -0
  2. humaneval.jsonl +0 -0
README.md ADDED
@@ -0,0 +1,52 @@
---
pretty_name: "HumanEval"
license: "mit"
language:
- en
tags:
- code-generation
- python
- text
task_categories:
- text-generation
---

# HumanEval

This repository hosts a copy of the widely used **HumanEval** dataset, a benchmark for evaluating the code-generation capabilities of Large Language Models (LLMs).

HumanEval consists of 164 hand-written Python programming tasks, each pairing a function signature and docstring with unit tests, designed to evaluate a model's ability to generate working code from a textual description. It is frequently used in research papers assessing LLMs' code-generation performance, particularly in the context of automated programming.

## Contents

- `humaneval.jsonl`: the standard set of programming tasks.

Each entry is a JSON object with the following fields:

```json
{
  "task_id": "...",
  "prompt": "...",
  "canonical_solution": "...",
  "test": "...",
  "entry_point": "..."
}
```
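In the original HumanEval release, these fields fit together at evaluation time: `prompt` holds the function signature and docstring, a model's completion (or the `canonical_solution`) supplies the body, and `test` defines a `check` function that is called on the `entry_point`. A minimal sketch, using a simplified hypothetical record rather than a real dataset entry:

```python
# Hypothetical record illustrating the schema; not an actual dataset entry.
record = {
    "task_id": "Example/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "canonical_solution": "    return a + b\n",
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3\n",
    "entry_point": "add",
}

# Assemble a complete program: prompt + solution body + tests + a call
# to check() on the entry point, then execute it.
program = (
    record["prompt"]
    + record["canonical_solution"]
    + record["test"]
    + f"check({record['entry_point']})\n"
)
exec(program)  # raises AssertionError if the solution fails its tests
print("passed")
```

Real harnesses substitute a model-generated completion for `canonical_solution` and run each program in a sandboxed subprocess rather than a bare `exec`.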

## Usage

```python
from datasets import load_dataset

ds = load_dataset("S3IC/humaneval")
```
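The file can also be read without the `datasets` library. A small sketch, assuming `humaneval.jsonl` is standard JSON Lines (one JSON object per line):

```python
import json

def load_jsonl(path):
    """Yield one parsed record per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example: collect every task id.
# tasks = list(load_jsonl("humaneval.jsonl"))
# print([t["task_id"] for t in tasks])
```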

## Source

This dataset is taken from the public HumanEval release: https://github.com/openai/human-eval

## License

HumanEval is released under the original MIT license provided by its authors. See the LICENSE file in the upstream repository for details.
humaneval.jsonl ADDED