Update README.md

README.md CHANGED
````diff
@@ -44,16 +44,16 @@ configs:
 - config_name: humaneval_splits
   data_files:
   - split: train
-    path: humaneval_splits/train*
+    path: iskhare/humaneval_splits/train*
   - split: test
-    path: humaneval_splits/test*
+    path: iskhare/humaneval_splits/test*
   default: true
 ---
 
 # Dataset Card for OpenAI HumanEval
 
 ## Table of Contents
-- [
+- [HumanEval with Splits](#openai-humaneval)
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
@@ -103,9 +103,13 @@ from datasets import load_dataset
 load_dataset("openai_humaneval")
 
 DatasetDict({
+    train: Dataset({
+        features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
+        num_rows: 32
+    })
     test: Dataset({
         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
-        num_rows:
+        num_rows: 132
     })
 })
 ```
@@ -135,7 +139,7 @@ An example of a dataset instance:
 
 ### Data Splits
 
-The dataset
+The dataset consists of a train split with 32 samples and a test split with 132 samples.
 
 ## Dataset Creation
 
@@ -188,7 +192,7 @@ With this dataset code generating models can be better evaluated which leads to
 ## Additional Information
 
 ### Dataset Curators
-
+Ishan Khare
 
 ### Licensing Information
 
@@ -204,8 +208,4 @@ MIT License
 archivePrefix={arXiv},
 primaryClass={cs.LG}
 }
-```
-
-### Contributions
-
-Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
+```
````
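The change above repartitions HumanEval's problems into a 32-sample train split and a 132-sample test split. As a minimal sketch of what such a partition could look like: the commit does not say how the 32 tasks were actually selected, so the seed and sampling method below are purely assumptions for illustration.

```python
import random

# HumanEval contains 164 problems, identified as "HumanEval/0" ... "HumanEval/163".
task_ids = [f"HumanEval/{i}" for i in range(164)]

# Assumed seed and sampling method -- the commit does not specify how the
# 32/132 partition was chosen, only its sizes.
rng = random.Random(0)
train_ids = rng.sample(task_ids, 32)
test_ids = [t for t in task_ids if t not in set(train_ids)]

print(len(train_ids), len(test_ids))  # 32 132
```

With the updated `data_files` globs in the YAML header, loading the `humaneval_splits` config would then yield the two-split `DatasetDict` shown in the diff.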