GabrielMroue (parquet-converter) committed 7be2d45 · verified · 0 parent(s)

Duplicate from openai/openai_humaneval

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,208 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ paperswithcode_id: humaneval
+ pretty_name: OpenAI HumanEval
+ tags:
+ - code-generation
+ dataset_info:
+   config_name: openai_humaneval
+   features:
+   - name: task_id
+     dtype: string
+   - name: prompt
+     dtype: string
+   - name: canonical_solution
+     dtype: string
+   - name: test
+     dtype: string
+   - name: entry_point
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 194394
+     num_examples: 164
+   download_size: 83920
+   dataset_size: 194394
+ configs:
+ - config_name: openai_humaneval
+   data_files:
+   - split: test
+     path: openai_humaneval/test-*
+   default: true
+ ---
+
+ # Dataset Card for OpenAI HumanEval
+
+ ## Table of Contents
+ - [OpenAI HumanEval](#openai-humaneval)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
82
+ ## Dataset Description
83
+
84
+ - **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
85
+ - **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
86
+
87
+ ### Dataset Summary
88
+
89
+ The HumanEval dataset released by OpenAI includes 164 programming problems with a function sig- nature, docstring, body, and several unit tests. They were handwritten to ensure not to be included in the training set of code generation models.
90
+
+ ### Supported Tasks and Leaderboards
+
+ ### Languages
+ The programming problems are written in Python and contain English natural text in comments and docstrings.
+
+ ## Dataset Structure
+
+ ```python
+ from datasets import load_dataset
+ load_dataset("openai_humaneval")
+
+ DatasetDict({
+     test: Dataset({
+         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
+         num_rows: 164
+     })
+ })
+ ```
+
+ ### Data Instances
+
+ An example of a dataset instance:
+
+ ```
+ {
+     "task_id": "test/0",
+     "prompt": "def return1():\n",
+     "canonical_solution": "    return 1",
+     "test": "def check(candidate):\n    assert candidate() == 1",
+     "entry_point": "return1"
+ }
+ ```
+
+ ### Data Fields
+
+ - `task_id`: identifier for the data sample
+ - `prompt`: input for the model, containing the function header and docstring
+ - `canonical_solution`: a reference solution for the problem posed in the `prompt`
+ - `test`: a function that checks generated code for correctness
+ - `entry_point`: name of the function the tests call
+
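Taken together, these fields are enough to score a model completion: concatenate the completion onto `prompt`, append `test`, and call `check` on the `entry_point` function. A minimal sketch of such a harness (not the official `human-eval` runner; the record below mirrors the `test/0` example instance rather than a real HumanEval task, and the `passes` helper is hypothetical):

```python
# Illustrative scoring harness for one HumanEval-style record.
# A real evaluation would iterate over all 164 tasks and sandbox the exec() call.
record = {
    "task_id": "test/0",
    "prompt": "def return1():\n",
    "canonical_solution": "    return 1",
    "test": "def check(candidate):\n    assert candidate() == 1",
    "entry_point": "return1",
}

def passes(record: dict, completion: str) -> bool:
    """Return True if prompt + completion passes the record's unit tests."""
    program = record["prompt"] + completion + "\n" + record["test"]
    namespace: dict = {}
    try:
        exec(program, namespace)  # defines the candidate function and check()
        namespace["check"](namespace[record["entry_point"]])  # run the tests
        return True
    except Exception:
        return False

print(passes(record, record["canonical_solution"]))  # → True
print(passes(record, "    return 2"))                # → False
```

Running many such checks over sampled completions is what the paper's pass@k metric aggregates.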
+
+ ### Data Splits
+
+ The dataset only consists of a test split with 164 samples.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Since code generation models are often trained on dumps of GitHub, a dataset that was not included in any dump was needed to evaluate them properly. However, since this dataset has itself been published on GitHub, it is likely to be included in future dumps.
+
+ ### Source Data
+
+ The dataset was handcrafted by engineers and researchers at OpenAI.
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ None.
+
+ ## Considerations for Using the Data
+ Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as the generated code could be harmful.
+
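One common first-line precaution (a sketch only; the `human-eval` repository ships its own sandboxed execution logic) is to run each candidate program in a separate subprocess with a timeout:

```python
# Run an untrusted candidate program in a child process with a timeout.
# This limits hangs and crashes but is NOT full isolation; real deployments
# add containers, seccomp filters, or disposable VMs on top of it.
import subprocess
import sys

def run_sandboxed(program: str, timeout_s: float = 5.0) -> bool:
    """Execute a Python program in a child interpreter; True on clean exit."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True,  # keep the child's output off our stdout
            timeout=timeout_s,    # kills the child if it runs too long
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

print(run_sandboxed("print('ok')"))          # → True
print(run_sandboxed("raise SystemExit(1)"))  # → False
```

`subprocess.run` kills the child process when the timeout expires, so infinite loops in generated code do not stall the evaluation.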
+ ### Social Impact of Dataset
+ With this dataset, code-generating models can be evaluated more rigorously, which should lead to fewer issues being introduced when such models are used.
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+ OpenAI
+
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+ ```
+ @misc{chen2021evaluating,
+     title={Evaluating Large Language Models Trained on Code},
+     author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
+     year={2021},
+     eprint={2107.03374},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
openai_humaneval/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f2871a15fbc95b6c683043359f4ed8e144c5a1c4f24f25f66bc51f598dfcfb6
+ size 83920
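The parquet file itself lives in LFS storage; what is committed here is only this three-line pointer. After `git lfs pull`, the downloaded file can be checked against the pointer, since the `oid` is simply the SHA-256 of the file's contents. A small sketch (the `lfs_oid` helper is illustrative, not part of any Git LFS API):

```python
# Compute the SHA-256 hex digest of a file's bytes, which is what the
# "oid sha256:..." line of a Git LFS pointer records for the real file.
import hashlib

def lfs_oid(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()
```

For the file added in this commit, `lfs_oid("openai_humaneval/test-00000-of-00001.parquet")` should equal the `oid` above once the real content has been pulled, and its size should be 83920 bytes.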