Ayushnielit parquet-converter committed on
Commit
6d849f2
·
verified ·
0 Parent(s):

Duplicate from code-search-net/code_search_net


Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (9)
  1. .gitattributes +27 -0
  2. README.md +468 -0
  3. code_search_net.py +218 -0
  4. data/go.zip +3 -0
  5. data/java.zip +3 -0
  6. data/javascript.zip +3 -0
  7. data/php.zip +3 -0
  8. data/python.zip +3 -0
  9. data/ruby.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,468 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - machine-generated
+ language:
+ - code
+ license:
+ - other
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 100K<n<1M
+ - 10K<n<100K
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ paperswithcode_id: codesearchnet
+ pretty_name: CodeSearchNet
+ dataset_info:
+ - config_name: all
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 5850604083
+     num_examples: 1880853
+   - name: test
+     num_bytes: 308626333
+     num_examples: 100529
+   - name: validation
+     num_bytes: 274564382
+     num_examples: 89154
+   download_size: 5117370511
+   dataset_size: 6433794798
+ - config_name: java
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1429272535
+     num_examples: 454451
+   - name: test
+     num_bytes: 82377246
+     num_examples: 26909
+   - name: validation
+     num_bytes: 42358315
+     num_examples: 15328
+   download_size: 1060569153
+   dataset_size: 1554008096
+ - config_name: go
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 738153234
+     num_examples: 317832
+   - name: test
+     num_bytes: 32286998
+     num_examples: 14291
+   - name: validation
+     num_bytes: 26888527
+     num_examples: 14242
+   download_size: 487525935
+   dataset_size: 797328759
+ - config_name: python
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1559645310
+     num_examples: 412178
+   - name: test
+     num_bytes: 84342064
+     num_examples: 22176
+   - name: validation
+     num_bytes: 92154786
+     num_examples: 23107
+   download_size: 940909997
+   dataset_size: 1736142160
+ - config_name: javascript
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 480286523
+     num_examples: 123889
+   - name: test
+     num_bytes: 24056972
+     num_examples: 6483
+   - name: validation
+     num_bytes: 30168242
+     num_examples: 8253
+   download_size: 1664713350
+   dataset_size: 534511737
+ - config_name: ruby
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 110681715
+     num_examples: 48791
+   - name: test
+     num_bytes: 5359280
+     num_examples: 2279
+   - name: validation
+     num_bytes: 4830744
+     num_examples: 2209
+   download_size: 111758028
+   dataset_size: 120871739
+ - config_name: php
+   features:
+   - name: repository_name
+     dtype: string
+   - name: func_path_in_repository
+     dtype: string
+   - name: func_name
+     dtype: string
+   - name: whole_func_string
+     dtype: string
+   - name: language
+     dtype: string
+   - name: func_code_string
+     dtype: string
+   - name: func_code_tokens
+     sequence: string
+   - name: func_documentation_string
+     dtype: string
+   - name: func_documentation_tokens
+     sequence: string
+   - name: split_name
+     dtype: string
+   - name: func_code_url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1532564870
+     num_examples: 523712
+   - name: test
+     num_bytes: 80203877
+     num_examples: 28391
+   - name: validation
+     num_bytes: 78163924
+     num_examples: 26015
+   download_size: 851894048
+   dataset_size: 1690932671
+ config_names:
+ - all
+ - go
+ - java
+ - javascript
+ - php
+ - python
+ - ruby
+ ---
+
+ # Dataset Card for CodeSearchNet corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+ - **Homepage:** https://wandb.ai/github/CodeSearchNet/benchmark
+ - **Repository:** https://github.com/github/CodeSearchNet
+ - **Paper:** https://arxiv.org/abs/1909.09436
+ - **Data:** https://doi.org/10.5281/zenodo.7908468
+ - **Leaderboard:** https://wandb.ai/github/CodeSearchNet/benchmark/leaderboard
+
+ ### Dataset Summary
+
+ The CodeSearchNet corpus is a dataset of 2 million (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages.
+
+ The corpus was gathered to support the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark), which explores the problem of retrieving code using natural language.
+
+ ### Supported Tasks and Leaderboards
+
+ - `language-modeling`: The dataset can be used to train language models over source code.
+
+ ### Languages
+
+ - Go **programming** language
+ - Java **programming** language
+ - Javascript **programming** language
+ - PHP **programming** language
+ - Python **programming** language
+ - Ruby **programming** language
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point consists of a function's code along with its documentation. Each data point also contains metadata about the function, such as the repository it was extracted from.
+ ```
+ {
+   'id': '0',
+   'repository_name': 'organisation/repository',
+   'func_path_in_repository': 'src/path/to/file.py',
+   'func_name': 'func',
+   'whole_func_string': 'def func(args):\n"""Docstring"""\n [...]',
+   'language': 'python',
+   'func_code_string': '[...]',
+   'func_code_tokens': ['def', 'func', '(', 'args', ')', ...],
+   'func_documentation_string': 'Docstring',
+   'func_documentation_string_tokens': ['Docstring'],
+   'split_name': 'train',
+   'func_code_url': 'https://github.com/<org>/<repo>/blob/<hash>/src/path/to/file.py#L111-L150'
+ }
+ ```
+ ### Data Fields
+
+ - `id`: arbitrary example number
+ - `repository_name`: name of the GitHub repository
+ - `func_path_in_repository`: path of the file which holds the function in the repository
+ - `func_name`: name of the function in the file
+ - `whole_func_string`: code and documentation of the function
+ - `language`: programming language in which the function is written
+ - `func_code_string`: function code
+ - `func_code_tokens`: tokens yielded by Treesitter
+ - `func_documentation_string`: function documentation
+ - `func_documentation_string_tokens`: tokens yielded by Treesitter
+ - `split_name`: name of the split to which the example belongs (one of train, test or valid)
+ - `func_code_url`: URL to the function code on GitHub
+
+ ### Data Splits
+
+ Three splits are available:
+ - train
+ - test
+ - valid
+
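For downstream use, each record can be reduced to a (documentation, code) pair. A minimal sketch using the field names above (the sample values are hypothetical placeholders; loading via `datasets.load_dataset("code_search_net", "python")` is noted only in a comment because it downloads the full archives):

```python
def to_pair(example: dict) -> tuple:
    """Reduce one corpus record to a (natural-language, code) training pair."""
    return example["func_documentation_string"], example["func_code_string"]

# In practice `example` would come from
# datasets.load_dataset("code_search_net", "python", split="train");
# here we use a placeholder record shaped like the instance above.
example = {
    "func_documentation_string": "Docstring",
    "func_code_string": "def func(args):\n    ...",
}
query, code = to_pair(example)
print(query)  # -> Docstring
```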
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ All details can be found in the [original technical report](https://arxiv.org/pdf/1909.09436.pdf).
+
+ **Corpus collection**:
+
+ The corpus was collected from publicly available open-source, non-fork GitHub repositories, using libraries.io to identify all projects that are used by at least one other project, sorted by “popularity” as indicated by the number of stars and forks.
+
+ Then, any project that does not have a license, or whose license does not explicitly permit redistribution of parts of the project, was removed. Treesitter - GitHub's universal parser - was then used to tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) and, where available, their respective documentation text, using a heuristic regular expression.
+
+ **Corpus filtering**:
+
+ Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where $c_i$ is some function documented by $d_i$. Pairs ($c_i$, $d_i$) are passed through the following preprocessing steps:
+
+ - Documentation $d_i$ is truncated to the first full paragraph, to remove in-depth discussion of function arguments and return values
+ - Pairs in which $d_i$ is shorter than three tokens are removed
+ - Functions $c_i$ whose implementation is shorter than three lines are removed
+ - Functions whose name contains the substring “test” are removed
+ - Constructors and standard extension methods (e.g. `__str__` in Python or `toString` in Java) are removed
+ - Duplicate and near-duplicate functions are removed, keeping only one version of each function
+
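The filtering heuristics above can be sketched as a simple predicate. This is a hypothetical illustration, not the authors' actual pipeline: tokenization, line counting and paragraph detection are simplified, and deduplication is omitted.

```python
def keep_pair(code: str, doc: str, func_name: str) -> bool:
    """Apply the corpus-filtering heuristics to one (code, documentation) pair."""
    doc = doc.split("\n\n", 1)[0]      # truncate to the first full paragraph
    if len(doc.split()) < 3:           # documentation shorter than three tokens
        return False
    if len(code.splitlines()) < 3:     # implementation shorter than three lines
        return False
    if "test" in func_name.lower():    # drop test functions
        return False
    return True
```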
+ #### Who are the source language producers?
+
+ Open-source contributors produced the code and documentation.
+
+ The dataset was gathered and preprocessed automatically.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Each example in the dataset is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out yourself which license the code is under.
+
+ ### Citation Information
+
+ @article{husain2019codesearchnet,
+   title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
+   author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
+   journal={arXiv preprint arXiv:1909.09436},
+   year={2019}
+ }
+
+ ### Contributions
+
+ Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset.
code_search_net.py ADDED
@@ -0,0 +1,218 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """CodeSearchNet corpus: proxy dataset for semantic code search"""
+
+ # TODO: add licensing info in the examples
+ # TODO: log richer information (especially while extracting the jsonl.gz files)
+ # TODO: enable custom configs, such as: "java+python"
+ # TODO: enable fetching examples with a given license, e.g.: "java_MIT"
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{husain2019codesearchnet,
+   title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
+   author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
+   journal={arXiv preprint arXiv:1909.09436},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ CodeSearchNet corpus contains about 6 million functions from open-source code \
+ spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). \
+ The CodeSearchNet Corpus also contains automatically generated query-like \
+ natural language for 2 million functions, obtained from mechanically scraping \
+ and preprocessing associated function documentation.
+ """
+
+ _HOMEPAGE = "https://github.com/github/CodeSearchNet"
+
+ _LICENSE = "Various"
+
+ _DATA_DIR_URL = "data/"
+ _AVAILABLE_LANGUAGES = ["python", "java", "javascript", "go", "ruby", "php"]
+ _URLs = {language: _DATA_DIR_URL + f"{language}.zip" for language in _AVAILABLE_LANGUAGES}
+ # URLs for "all" are just the concatenation of URLs for all languages
+ _URLs["all"] = _URLs.copy()
+
+
+ class CodeSearchNet(datasets.GeneratorBasedBuilder):
+     """CodeSearchNet corpus: proxy dataset for semantic code search."""
+
+     VERSION = datasets.Version("1.0.0", "Add CodeSearchNet corpus dataset")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="all",
+             version=VERSION,
+             description="All available languages: Java, Go, Javascript, Python, PHP, Ruby",
+         ),
+         datasets.BuilderConfig(
+             name="java",
+             version=VERSION,
+             description="Java language",
+         ),
+         datasets.BuilderConfig(
+             name="go",
+             version=VERSION,
+             description="Go language",
+         ),
+         datasets.BuilderConfig(
+             name="python",
+             version=VERSION,
+             description="Python language",
+         ),
+         datasets.BuilderConfig(
+             name="javascript",
+             version=VERSION,
+             description="Javascript language",
+         ),
+         datasets.BuilderConfig(
+             name="ruby",
+             version=VERSION,
+             description="Ruby language",
+         ),
+         datasets.BuilderConfig(
+             name="php",
+             version=VERSION,
+             description="PHP language",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "all"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "repository_name": datasets.Value("string"),
+                     "func_path_in_repository": datasets.Value("string"),
+                     "func_name": datasets.Value("string"),
+                     "whole_func_string": datasets.Value("string"),
+                     "language": datasets.Value("string"),
+                     "func_code_string": datasets.Value("string"),
+                     "func_code_tokens": datasets.Sequence(datasets.Value("string")),
+                     "func_documentation_string": datasets.Value("string"),
+                     "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
+                     "split_name": datasets.Value("string"),
+                     "func_code_url": datasets.Value("string"),
+                     # TODO - add licensing info in the examples
+                 }
+             ),
+             # No default supervised keys
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators.
+
+         Note: The original data is stored in S3, and follows this unusual directory structure:
+         ```
+         .
+         ├── <language_name>  # e.g. python
+         │   └── final
+         │       └── jsonl
+         │           ├── test
+         │           │   └── <language_name>_test_0.jsonl.gz
+         │           ├── train
+         │           │   ├── <language_name>_train_0.jsonl.gz
+         │           │   ├── <language_name>_train_1.jsonl.gz
+         │           │   ├── ...
+         │           │   └── <language_name>_train_n.jsonl.gz
+         │           └── valid
+         │               └── <language_name>_valid_0.jsonl.gz
+         ├── <language_name>_dedupe_definitions_v2.pkl
+         └── <language_name>_licenses.pkl
+         ```
+         """
+         data_urls = _URLs[self.config.name]
+         if isinstance(data_urls, str):
+             data_urls = {self.config.name: data_urls}
+         # Download & extract the language archives
+         data_dirs = [
+             os.path.join(directory, lang, "final", "jsonl")
+             for lang, directory in dl_manager.download_and_extract(data_urls).items()
+         ]
+
+         split2dirs = {
+             split_name: [os.path.join(directory, split_name) for directory in data_dirs]
+             for split_name in ["train", "test", "valid"]
+         }
+
+         split2paths = dl_manager.extract(
+             {
+                 split_name: [
+                     os.path.join(directory, entry_name)
+                     for directory in split_dirs
+                     for entry_name in os.listdir(directory)
+                 ]
+                 for split_name, split_dirs in split2dirs.items()
+             }
+         )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepaths": split2paths["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepaths": split2paths["test"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepaths": split2paths["valid"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepaths):
+         """Yields the examples by iterating through the available jsonl files."""
+         for file_id_, filepath in enumerate(filepaths):
+             with open(filepath, encoding="utf-8") as f:
+                 for row_id_, row in enumerate(f):
+                     # Key of the example = file_id + row_id,
+                     # to ensure all examples have a distinct key
+                     id_ = f"{file_id_}_{row_id_}"
+                     data = json.loads(row)
+                     yield id_, {
+                         "repository_name": data["repo"],
+                         "func_path_in_repository": data["path"],
+                         "func_name": data["func_name"],
+                         "whole_func_string": data["original_string"],
+                         "language": data["language"],
+                         "func_code_string": data["code"],
+                         "func_code_tokens": data["code_tokens"],
+                         "func_documentation_string": data["docstring"],
+                         "func_documentation_tokens": data["docstring_tokens"],
+                         "split_name": data["partition"],
+                         "func_code_url": data["url"],
+                     }
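As a standalone illustration of the row-keying scheme in `_generate_examples` above (a simplified sketch reading from in-memory strings instead of the extracted `.jsonl.gz` files):

```python
import json
from io import StringIO

def generate_examples(files):
    """Mirror of the keying logic above: key = "<file index>_<row index>"."""
    for file_id, f in enumerate(files):
        for row_id, row in enumerate(f):
            yield f"{file_id}_{row_id}", json.loads(row)

# Two small in-memory "jsonl files" standing in for the real archives.
files = [
    StringIO('{"func_name": "a"}\n{"func_name": "b"}\n'),
    StringIO('{"func_name": "c"}\n'),
]
keys = [key for key, _ in generate_examples(files)]
print(keys)  # combining file and row indices keeps keys unique across files
```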
data/go.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15d23f01dc2796447e1736263e6830079289d5ef41f09988011afdcf8da6b6e5
+ size 487525935
data/java.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05f9204b1808413fab30f0e69229e298f6de4ad468279d53a2aa5797e3a78c17
+ size 1060569153
data/javascript.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdc743f5af27f90c77584a2d29e2b7f8cecdd00c37b433c385b888ee062936dd
+ size 1664713350
data/php.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3bbf0d1b10010f88b058faea876f1f5471758399e30d58c11f78ff53660ce00
+ size 851894048
data/python.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7223c6460bebfa85697b586da91e47bc5d64790a4d60bba5917106458ab6b40e
+ size 940909997
data/ruby.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67aee5812d0f994df745c771c7791483f2b060561495747d424e307af4b342e6
+ size 111758028