# Dataset Summary

We develop a new contamination-free multilingual code dataset that facilitates LLM evaluation reproducibility.

# Collection

We collect up to **50,000** public repositories using the GitHub API.
### Copyleft licenses included in the dataset

| **License** | **Family** |
| :--------: | :-------: |
| CECILL-1.0, CECILL-1.1, CECILL-2.0, <br> CECILL-2.1, CECILL-C, EPL-1.0, EPL-2.0, <br> LGPL-2.1, LGPL-3.0, MS-RL, MPL-2.0 | Weak Copyleft |
| GPL-2.0, GPL-3.0 | Strong Copyleft |
| AGPL-3.0, EUPL-1.1, EUPL-1.2, OSL-3.0 | Network Copyleft |

<p><b>Table 1:</b> Copyleft licenses included in the dataset</p>
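As an illustration, the grouping in Table 1 can be expressed as a simple lookup. This is a minimal sketch, not part of our pipeline: the `LICENSE_FAMILY` table and `license_family` helper are hypothetical names, and SPDX-style license identifiers are assumed.

```python
# Map each copyleft license identifier from Table 1 to its family.
LICENSE_FAMILY = {
    **dict.fromkeys(
        ["CECILL-1.0", "CECILL-1.1", "CECILL-2.0", "CECILL-2.1", "CECILL-C",
         "EPL-1.0", "EPL-2.0", "LGPL-2.1", "LGPL-3.0", "MS-RL", "MPL-2.0"],
        "Weak Copyleft"),
    **dict.fromkeys(["GPL-2.0", "GPL-3.0"], "Strong Copyleft"),
    **dict.fromkeys(["AGPL-3.0", "EUPL-1.1", "EUPL-1.2", "OSL-3.0"],
                    "Network Copyleft"),
}

def license_family(spdx_id):
    """Return the copyleft family for a license, or None if not in Table 1."""
    return LICENSE_FAMILY.get(spdx_id)
```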
The features we extract for each repository are illustrated in the example below.

- **retrieval_date**: date when the repo was scraped from GitHub

We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1,000**-repository limit per search window (for a personal GitHub account), we shorten the search space to a **one-month window** and restart the iteration; otherwise, the window advances by two months. Once the entire timeframe (until **April 2024**) is covered, we reduce the star search space: between **900** and **100** stars we decrease the interval by **50** (e.g. search between [900, 850]), between **100** and **10** stars we decrease the interval by **10**, and for the last **10** stars we decrease by **1**. Since most repositories fall within the **0-100 star range** (e.g. Figure 1 shows the distribution of repositories with up to **500** stars for Java), filtering on **creation date** and **star count** helps us avoid API limits and scrape more data by narrowing the search space.
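The star-interval schedule described above can be sketched as follows. This is a minimal illustration, not our production scraper: the function name is ours, and interval endpoints are handled as half-open ranges for simplicity.

```python
def star_intervals():
    """Star-count search ranges, widest counts first: steps of 50 between
    900 and 100 stars, steps of 10 between 100 and 10 stars, and steps
    of 1 for the last 10 stars."""
    intervals = []
    hi = 900
    for step, floor in ((50, 100), (10, 10), (1, 0)):
        while hi > floor:
            intervals.append((hi - step, hi))  # e.g. first range is [850, 900]
            hi -= step
    return intervals
```

Each range can then be combined with a creation-date window to keep every search query under the API's result cap.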
The creation date window can be reduced even further (to the week or day level) in order to extract more data. After retrieving the repositories, we extract all the files corresponding to each language. We extend the programming-language extension list used for [The Stack](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) with 4 languages: EJS, Raku, Starlark, and WebAssembly.

The final dataset structure is shown in the example below.
- **repo_created_at**: creation date of the file's repo
- **repo_pushed_at**: date of the most recent push to the file's repo, up to the extraction date
- **sha**: SHA hash of the file's content
- **exact_duplicates_pubdataset**: boolean flag indicating whether the file has exact duplicates in another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
- **near_duplicates_pubdataset**: boolean flag indicating whether the file has near duplicates in another public dataset (The Stack v2, The Stack, RedPajama, GithubCode, CodeParrot)
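A minimal sketch of how such an exact-duplicate flag could be computed against a reference dataset. This is illustrative only: the helper names are ours, and the dataset's `sha` field may use a different hash (e.g. Git's blob SHA-1); SHA-256 below is an assumption.

```python
import hashlib

def file_sha(content: bytes) -> str:
    """Content hash used to detect exact duplicates across datasets."""
    return hashlib.sha256(content).hexdigest()

def flag_exact_duplicates(files, public_dataset_shas):
    """Set exact_duplicates_pubdataset on each file whose content hash
    also appears in a reference public dataset's hash set."""
    for f in files:
        f["exact_duplicates_pubdataset"] = file_sha(f["content"]) in public_dataset_shas
    return files
```

Near-duplicate flags require fuzzier matching (e.g. MinHash over token shingles) rather than an exact hash lookup.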
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png" alt="Figure 1: Distribution of scraped repositories with at most 500 stars." style="display: block; margin: 0 auto; width: 600px; height: auto;" />
<p><b>Figure 1:</b> Distribution of scraped repositories with at most 500 stars for Java</p>
</div>
# Cleaning

The next stage in our dataset pipeline is the cleaning procedure. We exclude any files **larger than 10 MB** and those with **fewer than 10 words**.
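These two thresholds can be sketched as a single filter predicate. A minimal illustration: treating a "word" as a whitespace-separated token is our assumption, as is measuring size in UTF-8 bytes.

```python
MAX_BYTES = 10 * 1024 * 1024  # 10 MB size cap
MIN_WORDS = 10                # minimum word count

def keep_file(content: str) -> bool:
    """Return True if a file passes both cleaning thresholds."""
    size_ok = len(content.encode("utf-8")) <= MAX_BYTES
    words_ok = len(content.split()) >= MIN_WORDS
    return size_ok and words_ok
```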
# Deduplication