WizzF committed · Commit 9d94849 · verified · 1 Parent(s): 4706613

Update README.md

Files changed (1): README.md +60 -63

README.md CHANGED
@@ -453,9 +453,9 @@ configs:
  path: data/Python/train-*
  ---
 
- # Dataset Summary
 
- We develop a new contamination-free multilingual code dataset that facilitates LLM evaluation reproducibility.
 
  # Collection
 
@@ -468,7 +468,6 @@ We collect up to **50,000** public repositories using the [GitHub API](https://d
  | GPL-2.0, GPL-3.0 | Strong Copyleft |
  | AGPL-3.0, EUPL-1.1, EUPL-1.2, OSL-3.0 | Network Copyleft |
 
- Table 1: Copyleft licenses included in the dataset
 
  The features we extract for each repository are illustrated in the example below.
 
@@ -513,7 +512,54 @@ The features we extract for each repository are illustrated in the example below.
 
  We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1000** repository limit per window (for a personal GitHub account), we shorten the
  search space to a **one-month window** and restart the iteration. Otherwise, the window advances by two months. Once the entire timeframe (until **April 2024**) is covered, we reduce the star search space: between **900** and **100** stars, we decrease the interval by **50** (e.g. search between [900, 850]), between **100** and **10** stars, we decrease the interval by **10**, and for the last **10** stars, we decrease by **1**. Since most repositories fall within the **0-100 star range** (e.g. Figure 1 showcases the distribution of repositories with up to **500** stars for Java), using the **creation date** and **star count** filters helps us avoid API limits and scrape more data by narrowing the search space.
- The creation date window can be reduced even further (week or day level), in order to extract more data. After retrieving the repositories, we extract all the files corresponding to each language. We extend the programming languages extension list used for [The Stack](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) with 4 languages: EJS, Raku, Starlark, and WebAssembly.
 
  The final dataset structure is shown in the example below.
 
@@ -560,73 +606,24 @@ The final dataset structure is shown in the example below.
  - **exact_duplicates_pubdataset**: boolean flag stating if there are any exact duplicate files found against another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
  - **near_duplicates_pubdataset**: boolean flag stating if there are any near duplicate files found against another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
 
- <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png) -->
- <div style="text-align: center;">
- <img src="https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png" alt="Figure 1: Distribution of scraped repositories with at most 500 stars." style="display: block; margin: 0 auto; width: 600px; height: auto;" />
- <p><b>Figure 1:</b> Distribution of scraped repositories with at most 500 stars for Java</p>
- </div>
-
-
- # Cleaning
-
- The next stage in our dataset pipeline is the cleaning procedure. We exclude any files **larger than 10 MB** and those with **fewer than 10 words**.
-
- # Deduplication
-
- The final stage of our dataset pipeline is the deduplication process. Firstly, we remove any potential duplicated repositories obtained due to the pagination process. We then perform **exact deduplication** between **our dataset and the Java-Stack v2**, and **within our dataset itself**, using the **sha256** function to generate hashes for each file.
- We choose this hash function because it provides a uniform distribution of hash values and minimizes collisions.
- For **near-deduplication**, we use the **MinHashLSH** algorithm from the [*datasketch*](https://ekzhu.com/datasketch/lsh.html) library. To calculate the minhashes, we use the same hash function as above, but we extract the first 16 bytes to generate 128-bit hash values.
- This approach balances the need for a strong hash function with the efficiency of a shorter hash length.
-
- Additionally, we use **128** permutations for LSH, with weights of **0.4** for **precision** and **0.6** for **recall**. We generate **7-character shingles** after [lowercasing the file content and removing whitespace](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
- We find that 7-shingles provide a reasonable trade-off between the number of shingles and the data processed, being small enough to keep the number of unique shingles manageable yet large enough to provide meaningful comparisons.
- It was shown that the number of shingles should be large enough to ensure a low probability of shingles appearing across documents, with **k = 5** suggested for smaller documents such as [emails](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
- However, Java files usually contain a **larger dictionary** of characters than emails, including arithmetic and comparison operators which are less frequent in emails.
- Thus, given the increased **complexity** and **size** of Java files, we consider 7-shingles appropriate to capture sufficient context, ensuring uniqueness and **reducing false positives**, which smaller shingles such as k = 5 might fail to achieve.
- Furthermore, **k = 9** was shown to be a safe choice for [large research articles](http://infolab.stanford.edu/~ullman/mmds/book.pdf); however, for our needs, 7-shingles strike a balance between accuracy and
- computational efficiency, crucial for handling the **Java-Stack v2's size** of over **222 M** files. This choice provides better computational efficiency by reducing the number of comparisons while maintaining a manageable shingle space.
- Lastly, we use a **Jaccard similarity threshold** of **0.7**, which proved to be efficient for both [SantaCoder](https://arxiv.org/abs/2301.03988) and [StarCoder](https://arxiv.org/abs/2305.06161) models. Such a high threshold
- reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead. Moreover, this standard threshold value has been shown to be [robust for duplicate detection](https://dl.acm.org/doi/10.1145/3359591.3359735).
-
- Instead of removing near-duplicates, we introduce a new feature to our dataset, called *near_dups_stkv2_idx*. This feature is a list of IDs of the near-duplicate files from the Java-Stack v2 corresponding to the current file in our dataset.
- The table below shows the number of files removed by each preprocessing method and the final number of files we are left with in the end (excluding near-duplicates).
- Starting with **7.8 M** files, we are left with about **2.13 M** after applying all preprocessing methods (this includes near-duplicates).
- Of the removed files, approximately **5.63 M** are exact duplicates (including about **0.87 M** from Java-Stack v2), and **0.8 M** are near-duplicates from Java-Stack v2.
- This implies that training any LLM on Stack v2 will breach copyleft code licenses, despite the dataset creators' claim that files under such licenses were removed.
-
- ### Files removed by each preprocessing method
- | **Method** | **#Files** |
- | :--------: | :-------: |
- | Raw dataset | 7.80 M |
- | Auto-generated | 0.04 M |
- | Exact-deduplication | 5.63 M |
- | Near-deduplication | 0.80 M |
- | Final dataset | 1.33 M |
-
 
  # Usage
 
- By default, the dataset includes near-duplicate entries from Java-Stack v2, with their IDs listed in the *near_dups_stkv2_idx* field.
- *An entry with an empty list in this field indicates that no near-duplicate files were found in Java-Stack v2 for that specific file.*
-
- Near-duplicates can be removed as shown in the example below.
 
  ```python
  from datasets import load_dataset
 
- # Load the full dataset
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2")
-
- # Load the train split (the only available split)
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
 
- # Stream the dataset instead of downloading it all at once
- data = load_dataset("LaughingLogits/Stackless_Java_V2", split="train", streaming=True)
- for sample in iter(data):
-     print(sample["content"])
 
- # Filter out near-duplicates from Java-Stack v2
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
- near_deduplicated_dataset = dataset.filter(lambda sample: len(sample["near_dups_stkv2_idx"]) == 0)
 
  ```
 
  path: data/Python/train-*
  ---
 
+ # The Heap Dataset
 
+ We develop **The Heap**, a new contamination-free multilingual code dataset comprising 50 languages, which facilitates reproducible LLM evaluation.
 
  # Collection
 
 
  | GPL-2.0, GPL-3.0 | Strong Copyleft |
  | AGPL-3.0, EUPL-1.1, EUPL-1.2, OSL-3.0 | Network Copyleft |
 
 
  The features we extract for each repository are illustrated in the example below.
 
 
  We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1000** repository limit per window (for a personal GitHub account), we shorten the
  search space to a **one-month window** and restart the iteration. Otherwise, the window advances by two months. Once the entire timeframe (until **April 2024**) is covered, we reduce the star search space: between **900** and **100** stars, we decrease the interval by **50** (e.g. search between [900, 850]), between **100** and **10** stars, we decrease the interval by **10**, and for the last **10** stars, we decrease by **1**. Since most repositories fall within the **0-100 star range** (e.g. Figure 1 showcases the distribution of repositories with up to **500** stars for Java), using the **creation date** and **star count** filters helps us avoid API limits and scrape more data by narrowing the search space.
+ The creation date window can be reduced even further (to week or day level) to extract more data. We remove any repositories duplicated by the pagination process. Lastly, we extract all the files corresponding to each language. We extend the programming language extension list used for [The Stack](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) with 4 languages: EJS, Raku, Starlark, and WebAssembly.
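For illustration, the windowed search can be sketched as below. This is a minimal sketch against the GitHub search API, not the released scraper: the helper name `search_window`, the token placeholder, and the example dates are our own assumptions.

```python
import requests

SEARCH_URL = "https://api.github.com/search/repositories"
HEADERS = {"Authorization": "token <YOUR_GITHUB_TOKEN>"}  # personal access token

def search_window(language: str, stars: str, created_from: str, created_to: str) -> list:
    """Collect one creation-date window; the search API caps results at 1000."""
    query = f"language:{language} stars:{stars} created:{created_from}..{created_to}"
    repos, page = [], 1
    while len(repos) < 1000:
        resp = requests.get(
            SEARCH_URL,
            headers=HEADERS,
            params={"q": query, "per_page": 100, "page": page},
        )
        items = resp.json().get("items", [])
        repos.extend(items)
        if len(items) < 100:  # last page of results reached
            break
        page += 1
    return repos

# Two-month tumbling window over the >900-star band. If the window saturates
# the 1000-result limit, restart the sweep with one-month windows instead.
window = search_window("Java", ">900", "2012-01-01", "2012-02-29")
if len(window) >= 1000:
    window = search_window("Java", ">900", "2012-01-01", "2012-01-31")
```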
+
+
+ <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png) -->
+ <div style="text-align: center;">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png" alt="Figure 1: Distribution of scraped repositories with at most 500 stars." style="display: block; margin: 0 auto; width: 600px; height: auto;" />
+ <p><b>Figure 1:</b> Distribution of scraped repositories with at most 500 stars for Java</p>
+ </div>
+
+
+ # Cleaning
+
+ The next stage in our dataset pipeline is the cleaning procedure. We exclude any files **larger than 10 MB** and those with **fewer than 10 words**.
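As a sketch, the two cleaning thresholds amount to a filter along these lines (our illustration; words are counted by simple whitespace splitting):

```python
MAX_BYTES = 10 * 1024 * 1024  # 10 MB size cap
MIN_WORDS = 10                # minimum word count

def keep_file(content: str) -> bool:
    # Drop files larger than 10 MB or containing fewer than 10 words.
    return len(content.encode("utf-8")) <= MAX_BYTES and len(content.split()) >= MIN_WORDS
```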
+
+ # Deduplication
+
+ The final stage of our dataset pipeline is the deduplication process. We apply both exact and near deduplication against the open code datasets listed in the table below.
+
+ ### Open code datasets used for deduplication
+ | **Dataset** | **Size** |
+ | :--------: | :-------: |
+ | The Stack V2 | 67.5 TB |
+ | The Stack | 6.4 TB |
+ | Red Pajama | 2.67 TB |
+ | GitHub Code | 1 TB |
+ | CodeParrot | 180 GB |
+
+ ## Exact Deduplication
+ We remove exact duplicates **within our dataset itself**, and then we apply exact deduplication against the open datasets. For that, we use the **sha256** function to generate hashes for each file.
+ We choose this hash function because it provides a uniform distribution of hash values and minimizes collisions.
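A minimal sketch of this step (our illustration, not the released pipeline): hash each file's content with SHA-256 and keep only the first occurrence of each digest.

```python
import hashlib

def sha256_digest(content: str) -> str:
    # SHA-256 hex digest of the raw file content.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# `records` is a hypothetical iterable of rows with a "content" field.
records = [{"content": "int x = 1;"}, {"content": "int x = 1;"}]

seen = set()
unique_files = []
for record in records:
    digest = sha256_digest(record["content"])
    if digest not in seen:  # first occurrence wins; later exact duplicates are dropped
        seen.add(digest)
        unique_files.append(record)
```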
+
+ ## Near Deduplication
+
+ We apply the **MinHashLSH** algorithm using the [*datasketch*](https://ekzhu.com/datasketch/lsh.html) library. To calculate the minhashes, we use the same hash function as above, but we extract the first 16 bytes to generate 128-bit hash values.
+ This approach balances the need for a strong hash function with the efficiency of a shorter hash length.
+
+ Additionally, we use **128** permutations for LSH, with weights of **0.4** for **precision** and **0.6** for **recall**. We generate **7-character shingles** after [lowercasing the file content and removing whitespace](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
+ We find that 7-shingles provide a reasonable trade-off between the number of shingles and the data processed, being small enough to keep the number of unique shingles manageable yet large enough to provide meaningful comparisons.
+ It was shown that the number of shingles should be large enough to ensure a low probability of shingles appearing across documents, with **k = 5** suggested for smaller documents such as [emails](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
+ However, code files usually contain a **larger dictionary** of characters than emails, including arithmetic and comparison operators which are less frequent in emails.
+ Thus, given the increased **complexity** and **size** of code files, we consider 7-shingles appropriate to capture sufficient context, ensuring uniqueness and **reducing false positives**, which smaller shingles such as k = 5 might fail to achieve.
+ Furthermore, **k = 9** was shown to be a safe choice for [large research articles](http://infolab.stanford.edu/~ullman/mmds/book.pdf); however, for our needs, 7-shingles strike a balance between accuracy and
+ computational efficiency, crucial for handling the extensive size of the datasets. This choice provides better computational efficiency by reducing the number of comparisons while maintaining a manageable shingle space.
+ Lastly, we use a **Jaccard similarity threshold** of **0.7**, which proved to be efficient for both [SantaCoder](https://arxiv.org/abs/2301.03988) and [StarCoder](https://arxiv.org/abs/2305.06161) models. A high threshold
+ reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead. Moreover, this standard threshold value has been shown to be [robust for duplicate detection](https://dl.acm.org/doi/10.1145/3359591.3359735).
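The configuration above can be reproduced with datasketch roughly as follows. This is a sketch, not the released pipeline: it uses the library's default hash function for brevity (the 16-byte SHA-256 variant would be passed via a custom `hashfunc`), the shingling helper is our own, and we assume the precision/recall weighting maps to `weights=(0.4, 0.6)`.

```python
import re
from datasketch import MinHash, MinHashLSH

def shingles(content: str, k: int = 7) -> set:
    # Lowercase, strip whitespace, then take all k-character shingles.
    text = re.sub(r"\s+", "", content.lower())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash(content: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for sh in shingles(content):
        m.update(sh.encode("utf-8"))
    return m

# LSH index with Jaccard threshold 0.7 and the 0.4/0.6 precision/recall weighting.
lsh = MinHashLSH(threshold=0.7, num_perm=128, weights=(0.4, 0.6))

reference = "public class A { int x = 1; }"  # stand-in for an indexed corpus file
candidate = "public class A { int x = 2; }"  # stand-in for a file from our dataset

lsh.insert("ref_0", minhash(reference))
near_duplicate_keys = lsh.query(minhash(candidate))  # keys of near-duplicate candidates
```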
+
+
+ Instead of removing exact and near duplicates found against other open datasets, we add boolean flags to our dataset. This approach enhances reproducibility by allowing researchers to filter the dataset for unique files according to their specific requirements.
 
  The final dataset structure is shown in the example below.
 
 
  - **exact_duplicates_pubdataset**: boolean flag stating if there are any exact duplicate files found against another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
  - **near_duplicates_pubdataset**: boolean flag stating if there are any near duplicate files found against another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
 
 
  # Usage
 
+ Using the Datasets API, our dataset can be used as follows:
 
  ```python
  from datasets import load_dataset
 
+ # Pick which public dataset's duplicate flags to filter on, and the language subset.
+ dataset_name = 'redpajama'
+ language = 'Python'
 
+ ds = load_dataset(
+     "WizzF/Heap-Forge",
+     language,
+     split="train",
+     num_proc=16,
+ )
 
+ # Keep only files with no exact or near duplicates against the chosen dataset.
+ ds = ds.filter(lambda x: not x[f'exact_duplicates_{dataset_name}'] and not x[f'near_duplicates_{dataset_name}'])
 
  ```