  - split: train
    path: data/Python/train-*
---

# Dataset Summary

We develop a new contamination-free multilingual code dataset that facilitates reproducible LLM evaluation.

# Collection

We collect up to **50,000** public repositories using the [GitHub API](https://docs.github.com/en/rest/search/search?apiVersion=2022-11-28), filtering on *license type*, *star count*, and *creation date*. Repositories with non-permissive licenses are prioritized to reduce contamination, as the public code datasets we deduplicate against primarily contain permissive or no-license repositories. We select repositories created before **April 2024** in decreasing order of star count. To handle GitHub rate limits, we use timeouts and pagination during scraping.
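
As a rough illustration, a single search query with pagination and rate-limit handling could look like the sketch below. This is a minimal sketch, not our exact scraper: it assumes the `requests` library and a hypothetical `GITHUB_TOKEN` environment variable, and the query values are illustrative.

```python
import os
import time
import requests

API = "https://api.github.com/search/repositories"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # hypothetical token variable
}

def search_repos(query, max_pages=10, per_page=100):
    """Page through the GitHub search API (capped at 1,000 results per query)."""
    repos = []
    for page in range(1, max_pages + 1):
        while True:
            resp = requests.get(
                API,
                headers=HEADERS,
                params={"q": query, "sort": "stars", "order": "desc",
                        "per_page": per_page, "page": page},
                timeout=30,
            )
            if resp.status_code in (403, 429):  # rate limited: back off and retry
                time.sleep(60)
                continue
            break
        resp.raise_for_status()
        items = resp.json().get("items", [])
        repos.extend(items)
        if len(items) < per_page:  # no further pages
            break
    return repos

# Example window: GPL-3.0 repositories with more than 900 stars created in early 2018.
repos = search_repos("license:gpl-3.0 stars:>900 created:2018-01-01..2018-02-28")
```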

### Copyleft licenses included in the dataset

| **License** | **Family** |
| :--------: | :-------: |
| CECILL-1.0 | Weak Copyleft |
| CECILL-1.1 | Weak Copyleft |
| CECILL-2.0 | Weak Copyleft |
| CECILL-2.1 | Weak Copyleft |
| CECILL-C | Weak Copyleft |
| EPL-1.0 | Weak Copyleft |
| EPL-2.0 | Weak Copyleft |
| LGPL-2.1 | Weak Copyleft |
| LGPL-3.0 | Weak Copyleft |
| MS-RL | Weak Copyleft |
| MPL-2.0 | Weak Copyleft |
| GPL-2.0 | Strong Copyleft |
| GPL-3.0 | Strong Copyleft |
| AGPL-3.0 | Network Copyleft |
| EUPL-1.1 | Network Copyleft |
| EUPL-1.2 | Network Copyleft |
| OSL-3.0 | Network Copyleft |

The features we extract for each repository are illustrated in the example below.

```json
{
  "id": 126178683,
  "full_name": "halo-dev/halo",
  "html_url": "https://github.com/halo-dev/halo",
  "stargazers_count": 29115,
  "forks_count": 8985,
  "watchers_count": 29115,
  "open_issues_count": 278,
  "language": "Java",
  "created_at": "2018-03-21T12:56:52Z",
  "pushed_at": "2023-10-28T16:29:39Z",
  "license": {
    "key": "gpl-3.0",
    "name": "GNU General Public License v3.0",
    "spdx_id": "GPL-3.0",
    "url": "https://api.github.com/licenses/gpl-3.0",
    "node_id": "MDc6TGljZW5zZTk="
  },
  "retrieval_date": "10/30/2023, 3:24:57 PM (Europe/Amsterdam)"
}
```

### Repository Fields

- **id**: unique ID of the repo
- **full_name**: complete name of the repo
- **html_url**: URL of the repo
- **stargazers_count**: number of stars of the repo
- **forks_count**: number of forks of the repo
- **watchers_count**: number of watchers of the repo
- **open_issues_count**: number of open issues of the repo at extraction time
- **language**: main language of the repo
- **created_at**: creation date of the repo
- **pushed_at**: date of the most recent push to the repo as of the extraction date
- **license**: license type of the repo
- **retrieval_date**: date when the repo was scraped from GitHub

We start by retrieving repositories with more than **900** stars using **two-month tumbling windows** over the creation date. If we hit the **1,000**-result limit per query (for a personal GitHub account), we shorten the window to **one month** and restart the iteration; otherwise, the window advances by two months. Once the timeframe until **April 2024** is covered, we narrow the star range: between **900** and **100** stars we decrease the interval by **50** (e.g. we search [900, 850]), between **100** and **10** stars we decrease it by **10**, and for the last **10** stars we decrease it by **1**. Since most repositories fall within the **0-100 star range** (e.g. Figure 1 shows the distribution of repositories with up to **500** stars for Java), combining the **creation date** and **star count** filters helps us stay within API limits and scrape more data by narrowing the search space. The creation date window can be reduced even further (to the week or day level) to extract more data; this windowing scheme is sketched below.

After retrieving the repositories, we extract all files corresponding to each language. We extend the programming language extension list used for [The Stack](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) with 4 languages: EJS, Raku, Starlark, and WebAssembly.
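
As an illustration, the window-narrowing scheme could be sketched as follows. This is a rough outline, not our exact implementation: it reuses the hypothetical `search_repos` helper sketched earlier, the date arithmetic is approximate, and only the first few star bands are spelled out.

```python
from datetime import date, timedelta

RESULT_CAP = 1000  # the GitHub Search API returns at most 1,000 results per query

def date_windows(start, end, months):
    """Yield (window_start, window_end) creation-date windows of roughly `months` months."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=31 * months), end)
        yield cur, nxt
        cur = nxt

def collect_star_band(star_query,
                      start=date(2008, 1, 1),   # illustrative scan start
                      end=date(2024, 4, 1)):    # repositories created before April 2024
    """Scan one star band over creation-date windows, shrinking windows that hit the cap."""
    repos = []
    for lo, hi in date_windows(start, end, months=2):
        batch = search_repos(f"{star_query} created:{lo:%Y-%m-%d}..{hi:%Y-%m-%d}")
        if len(batch) >= RESULT_CAP:  # too many hits: redo this span with one-month windows
            for lo1, hi1 in date_windows(lo, hi, months=1):
                batch = search_repos(f"{star_query} created:{lo1:%Y-%m-%d}..{hi1:%Y-%m-%d}")
                repos.extend(batch)
        else:
            repos.extend(batch)
    return repos

# Star bands scanned from the top down; lower bands shrink to 50-, 10- and 1-star steps.
bands = ["stars:>900", "stars:850..900", "stars:800..850"]  # ... and so on
all_repos = [repo for band in bands for repo in collect_star_band(band)]
```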

The final dataset structure is shown in the example below.

```json
{
  "file_name": "Font.java",
  "file_path": ".../lateralgm/resources/Font.java",
  "content": "*/ package org.lateralgm.resources; import java.util.EnumMap; import org.lateralgm.main.Prefs; ...",
  "file_size": 1985,
  "language": "Java",
  "extension": ".java",
  "repo_name": "lwizchz/GameMaker-HTML5-Player",
  "repo_stars": 22,
  "repo_forks": 9,
  "repo_open_issues": 0,
  "repo_created_at": "2011-09-10T16:05:20Z",
  "repo_pushed_at": "2013-05-06T23:00:17Z",
  "sha": "00046809b218b2c058f4be7...",
  "exact_duplicates_stackv1": false,
  "exact_duplicates_stackv2": true,
  "near_duplicates_stackv1": true,
  "near_duplicates_stackv2": false,
  ...
}
```

### Dataset Fields

- **file_name**: name of the file extracted from its repo
- **file_path**: path to the file in its repo
- **content**: content of the file
- **file_size**: size of the file
- **language**: language of the file
- **extension**: language extension of the file
- **repo_name**: complete name of the file's repo
- **repo_stars**: number of stars of the file's repo
- **repo_forks**: number of forks of the file's repo
- **repo_open_issues**: number of open issues of the file's repo at the extraction date
- **repo_created_at**: creation date of the file's repo
- **repo_pushed_at**: date of the most recent push to the file's repo as of the extraction date
- **sha**: SHA value of the file's content
- **exact_duplicates_stackv1**: boolean flag stating whether the file has exact duplicates in The Stack v1
- **exact_duplicates_stackv2**: boolean flag stating whether the file has exact duplicates in The Stack v2
- **near_duplicates_stackv1**: boolean flag stating whether the file has near-duplicates in The Stack v1
- **near_duplicates_stackv2**: boolean flag stating whether the file has near-duplicates in The Stack v2
- **near_dups_stkv2_idx**: list of IDs of the near-duplicate files from the Java-Stack v2 (see Deduplication below)

<div style="text-align: center;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png" alt="Figure 1: Distribution of scraped repositories with at most 500 stars." style="display: block; margin: 0 auto; width: 600px; height: auto;" />
  <p><b>Figure 1:</b> Distribution of scraped repositories with at most 500 stars.</p>
</div>

# Cleaning

The next stage in our dataset pipeline is the cleaning procedure. We exclude Java files **larger than 50 MB** and those with **fewer than 10 words**.
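
As a small illustration, such a filter could look like the sketch below. The thresholds come from the text; the helper name and record layout are hypothetical.

```python
MAX_FILE_SIZE = 50 * 1024 * 1024  # 50 MB
MIN_WORDS = 10

def keep_file(content: str, file_size: int) -> bool:
    """Keep files that are at most 50 MB and contain at least 10 words."""
    return file_size <= MAX_FILE_SIZE and len(content.split()) >= MIN_WORDS

# Example: a file this short would be dropped.
print(keep_file("int x;", file_size=7))  # False
```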

# Deduplication

The final stage of our dataset pipeline is the deduplication process. First, we remove any repositories duplicated as a result of the pagination process. We then perform **exact deduplication** between **our dataset and the Java-Stack v2**, and **within our dataset itself**, using the **SHA-256** function to generate a hash of each file.
We choose this hash function because it minimizes collisions and distributes hash values uniformly across the hash space.
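
A minimal sketch of this exact-deduplication step (assuming file contents are available as strings; the variable names are illustrative):

```python
import hashlib

def sha256_hex(content: str) -> str:
    """SHA-256 hash of a file's content, used as the exact-deduplication key."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

seen = set()        # could be pre-populated with hashes of Java-Stack v2 files
unique_files = []
for file in [{"content": "class A {}"}, {"content": "class A {}"}]:  # toy input
    digest = sha256_hex(file["content"])
    if digest not in seen:  # keep only the first occurrence of each content hash
        seen.add(digest)
        unique_files.append(file)

print(len(unique_files))  # 1
```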

For **near-deduplication**, we use the **MinHashLSH** algorithm from the [*datasketch*](https://ekzhu.com/datasketch/lsh.html) library. To compute the MinHashes, we use the same hash function as above but keep only the first 16 bytes, producing 128-bit hash values.
This approach balances the need for a strong hash function with the efficiency of a shorter hash length.

Additionally, we use **128** permutations for LSH, with weights of **0.4** for **precision** and **0.6** for **recall**. We generate **7-character shingles** after [lowercasing the file content and removing whitespace](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
We find that 7-shingles provide a reasonable trade-off between the number of shingles and the amount of data processed: small enough to keep the number of unique shingles manageable, yet large enough to provide meaningful comparisons.
The shingle length should be large enough that a given shingle is unlikely to appear across unrelated documents; **k = 5** has been suggested for smaller documents such as [emails](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
However, Java files usually draw on a **larger character vocabulary** than emails, including arithmetic and comparison operators that are rare in emails.
Thus, given the increased **complexity** and **size** of Java files, we consider 7-shingles appropriate for capturing sufficient context, ensuring uniqueness and **reducing false positives**, which smaller shingles such as k = 5 might fail to achieve.
Furthermore, **k = 9** has been shown to be a safe choice for [large research articles](http://infolab.stanford.edu/~ullman/mmds/book.pdf); for our needs, however, 7-shingles strike a balance between accuracy and computational efficiency, which is crucial for handling the **Java-Stack v2's** size of over **222 M** files, as they reduce the number of comparisons while keeping the shingle space manageable.
Lastly, we use a **Jaccard similarity threshold** of **0.7**, which proved effective for both the [SantaCoder](https://arxiv.org/abs/2301.03988) and [StarCoder](https://arxiv.org/abs/2305.06161) models. Such a high threshold reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead; this standard threshold value has also been shown to be [robust for duplicate detection](https://dl.acm.org/doi/10.1145/3359591.3359735).
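
A rough sketch of this near-deduplication setup with `datasketch` is shown below. It is illustrative only: it uses the library's default hash function rather than our truncated SHA-256 variant, toy strings instead of real Java files, and hypothetical index keys.

```python
import re
from datasketch import MinHash, MinHashLSH

NUM_PERM, K = 128, 7  # 128 permutations, 7-character shingles

def shingles(text: str, k: int = K) -> set:
    """Lowercase, strip whitespace, and split the text into k-character shingles."""
    normalized = re.sub(r"\s+", "", text.lower())
    return {normalized[i:i + k] for i in range(max(len(normalized) - k + 1, 1))}

def minhash(text: str) -> MinHash:
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m

# LSH index with a 0.7 Jaccard threshold and (0.4, 0.6) precision/recall weighting.
lsh = MinHashLSH(threshold=0.7, num_perm=NUM_PERM, weights=(0.4, 0.6))
lsh.insert("stackv2_file_1", minhash("public class Font { int size = 12; }"))

query = minhash("public class Font { int  size = 12; }")
print(lsh.query(query))  # candidate near-duplicates, e.g. ['stackv2_file_1']
```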

Instead of removing near-duplicates, we introduce a new field in our dataset, called *near_dups_stkv2_idx*. This field lists the IDs of the near-duplicate files from the Java-Stack v2 that correspond to the current file in our dataset.
The table below shows the number of files removed by each pre-processing method and the final number of files we are left with (excluding near-duplicates).
Starting with **7.8 M** files, we are left with about **2.13 M** after applying all pre-processing methods (a figure that still includes near-duplicates).
Of the removed files, approximately **5.63 M** are exact duplicates (including about **0.87 M** shared with the Java-Stack v2), and **0.8 M** are near-duplicates of Java-Stack v2 files.
This implies that training any LLM on The Stack v2 will breach copyleft code licenses, despite the dataset creators' claim that files under such licenses were removed.

### Files removed by each pre-processing method

| **Method** | **#Files** |
| :--------: | :-------: |
| Raw dataset | 7.80 M |
| Auto-generated | 0.04 M |
| Exact deduplication | 5.63 M |
| Near-deduplication | 0.80 M |
| Final dataset | 1.33 M |

# Usage

By default, the dataset includes near-duplicate entries from the Java-Stack v2, with their IDs listed in the *near_dups_stkv2_idx* field.
*An entry with an empty list in this field indicates that no near-duplicate files were found in the Java-Stack v2 for that specific file.*

Near-duplicates can be removed as shown in the example below.

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("LaughingLogits/Stackless_Java_V2")

# Load the train split (the only split available)
dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")

# Stream the dataset
data = load_dataset("LaughingLogits/Stackless_Java_V2", split="train", streaming=True)
for sample in iter(data):
    print(sample["content"])

# Filter out files that have near-duplicates in the Java-Stack v2
dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
near_deduplicated_dataset = dataset.filter(lambda sample: len(sample["near_dups_stkv2_idx"]) == 0)
```