---
license: mit
dataset_info:
  features:
  - name: problem_id
    dtype: int64
  - name: question
    dtype: string
  - name: solutions
    dtype: string
  - name: input_output
    dtype: string
  - name: difficulty
    dtype: string
  - name: url
    dtype: string
  - name: starter_code
    dtype: string
  splits:
  - name: train
    num_bytes: 103144035
    num_examples: 5000
  - name: test
    num_bytes: 1226206337
    num_examples: 5000
  download_size: 788737903
  dataset_size: 1329350372
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# NOTE

This is the same dataset as the [original APPS dataset](https://huggingface.co/datasets/codeparrot/apps/blob/main/README.md), but the original repo does not work with `datasets` >= 4.0 because it uses a dataset loading script. There is an open PR to move to Parquet, but it has been open for over a year and seems unlikely to be merged. As a result, this repository exists under the same MIT license to work with new `datasets` versions. All credit goes to the original authors of the dataset (and the README below is copied from the original repo).

# APPS Dataset

## Dataset Description

[APPS](https://arxiv.org/abs/2105.09938) is a benchmark for code generation with 10000 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications.
You can also find the **APPS metric** on the Hub here: [codeparrot/apps_metric](https://huggingface.co/spaces/codeparrot/apps_metric).

## Languages

The dataset contains questions in English and code solutions in Python.

## Dataset Structure

```python
from datasets import load_dataset
load_dataset("codeparrot/apps")

DatasetDict({
    train: Dataset({
        features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code'],
        num_rows: 5000
    })
    test: Dataset({
        features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code'],
        num_rows: 5000
    })
})
```

### How to use it

You can load the train split and inspect a sample with the following lines of code:

```python
from datasets import load_dataset
import json

ds = load_dataset("codeparrot/apps", split="train")
sample = next(iter(ds))
# non-empty solutions and input_output features can be parsed from text format this way:
sample["solutions"] = json.loads(sample["solutions"])
sample["input_output"] = json.loads(sample["input_output"])
print(sample)

# OUTPUT:
{
    'problem_id': 0,
    'question': 'Polycarp has $n$ different binary words. A word called binary if it contains only characters \'0\' and \'1\'. For example...',
    'solutions': ["for _ in range(int(input())):\n n = int(input())\n mass = []\n zo = 0\n oz = 0\n zz = 0\n oo = 0\n...", ...],
    'input_output': {'inputs': ['4\n4\n0001\n1000\n0011\n0111\n3\n010\n101\n0\n2\n00000\n00001\n4\n01\n001\n0001\n00001\n'],
                     'outputs': ['1\n3 \n-1\n0\n\n2\n1 2 \n']},
    'difficulty': 'interview',
    'url': 'https://codeforces.com/problemset/problem/1259/D',
    'starter_code': ''
}
```

Each sample consists of a programming problem formulation in English, some ground-truth Python solutions, test cases defined by their inputs and outputs (and a function name, if provided), as well as some metadata about the difficulty level of the problem and its source.

If a sample has a non-empty `input_output` feature, you can read it as a dictionary with the keys `inputs` and `outputs` (and `fn_name`, if it exists); similarly, you can parse the solutions into a list of solutions, as shown in the code above.
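
Since both fields are stored as JSON text and may be empty (see the statistics below), a small helper can make this parsing robust. This is a minimal sketch, not part of the dataset or the original repo; `parse_sample` is a hypothetical name:

```python
import json

def parse_sample(sample):
    """Decode the JSON-encoded fields of an APPS sample in place.

    Empty strings are skipped, since json.loads("") raises a
    json.JSONDecodeError.
    """
    for key in ("solutions", "input_output"):
        if sample[key]:  # skip empty strings
            sample[key] = json.loads(sample[key])
    return sample

# For call-based problems, the parsed test dictionary also carries the
# entry point: parse_sample(sample)["input_output"].get("fn_name")
```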

You can also filter the dataset by difficulty level: introductory, interview, and competition. With the original loading script, you could pass the difficulties as a list; e.g. if you want the most challenging problems, you would select the competition level:

```python
ds = load_dataset("codeparrot/apps", split="train", difficulties=["competition"])
print(next(iter(ds))["question"])

# OUTPUT:
"""\
Codefortia is a small island country located somewhere in the West Pacific. It consists of $n$ settlements connected by
...

For each settlement $p = 1, 2, \dots, n$, can you tell what is the minimum time required to travel between the king's residence and the parliament house (located in settlement $p$) after some roads are abandoned?

-----Input-----

The first line of the input contains four integers $n$, $m$, $a$ and $b$
...

-----Output-----

Output a single line containing $n$ integers
...

-----Examples-----
Input
5 5 20 25
1 2 25
...

Output
0 25 60 40 20
...
```
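
Note that the `difficulties` keyword argument is implemented by the original repo's loading script, so it is not available with this Parquet mirror. A minimal sketch of an equivalent selection with the standard `datasets` API, assuming the lowercase difficulty values shown above (e.g. `'interview'`, `'competition'`):

```python
from datasets import load_dataset

# Load the full split, then filter on the `difficulty` column.
ds = load_dataset("codeparrot/apps", split="train")
competition = ds.filter(lambda sample: sample["difficulty"] == "competition")
print(next(iter(competition))["question"])
```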

### Data Fields

|Field|Type|Description|
|---|---|---|
|problem_id|int|problem id|
|question|string|problem description|
|solutions|string|JSON string with some Python solutions|
|input_output|string|JSON string with the "inputs" and "outputs" of the test cases; may also include "fn_name", the name of the function to call|
|difficulty|string|difficulty level of the problem|
|url|string|URL of the source of the problem|
|starter_code|string|starter code to include in prompts|

Note that only a few samples have `fn_name` and `starter_code` specified.
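
As an illustration of how these fields fit together in a prompt, here is a minimal sketch; `build_prompt` is a hypothetical helper, and the template below is illustrative rather than the format used by the APPS paper or metric:

```python
def build_prompt(sample):
    """Assemble a simple code-generation prompt from an APPS sample."""
    prompt = "QUESTION:\n" + sample["question"]
    if sample["starter_code"]:
        # Only a few samples provide starter code.
        prompt += "\n\nSTARTER CODE:\n" + sample["starter_code"]
    prompt += "\n\nANSWER:\n"
    return prompt
```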

### Data Splits

The dataset contains train and test splits with 5000 samples each.

### Dataset Statistics

* 10000 coding problems
* 131777 test cases
* all problems have at least one test case, except 195 samples in the train split
* for the test split, the average number of test cases is 21.2
* the average length of a problem is 293.2 words
* all files have ground-truth solutions, except 1235 samples in the test split
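
These counts can be spot-checked directly from the raw fields. A minimal sketch, assuming missing test cases and missing solutions are stored as empty strings in `input_output` and `solutions` respectively:

```python
from datasets import load_dataset

train = load_dataset("codeparrot/apps", split="train")
# Count train problems without any test cases (expected: 195).
print(sum(1 for s in train if not s["input_output"]))

test = load_dataset("codeparrot/apps", split="test")
# Count test problems without ground-truth solutions (expected: 1235).
print(sum(1 for s in test if not s["solutions"]))
```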

## Dataset Creation

To create the APPS dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Codewars, AtCoder, Kattis, and Codeforces. For more details please refer to the original [paper](https://arxiv.org/pdf/2105.09938.pdf).

## Considerations for Using the Data

In [AlphaCode](https://arxiv.org/pdf/2203.07814v1.pdf) the authors found that this dataset can generate many false positives during evaluation, where incorrect submissions are marked as correct due to lack of test coverage.
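
To make this failure mode concrete, consider a naive checker in the spirit of the sketch below (an illustration, not the official [apps_metric](https://huggingface.co/spaces/codeparrot/apps_metric) implementation): it accepts any program whose output matches the provided test cases, so an incorrect program can still pass when only a handful of tests are available.

```python
import json
import subprocess
import sys

def naive_check(solution_code: str, input_output: str, timeout: float = 10.0) -> bool:
    """Run a stdin/stdout solution on each test input and compare outputs.

    Naive on purpose: no sandboxing, and exact string comparison after
    stripping, so it only demonstrates why sparse tests allow false
    positives.
    """
    tests = json.loads(input_output)
    for inp, expected in zip(tests["inputs"], tests["outputs"]):
        result = subprocess.run(
            [sys.executable, "-c", solution_code],
            input=inp, capture_output=True, text=True, timeout=timeout,
        )
        if result.stdout.strip() != str(expected).strip():
            return False
    return True
```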

## Citation Information

```
@article{hendrycksapps2021,
  title={Measuring Coding Challenge Competence With APPS},
  author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt},
  journal={NeurIPS},
  year={2021}
}
```