LaughingLogits committed on
Commit dfa5571 · verified · 1 Parent(s): d1cfc5f

update readme file

Files changed (1): README.md +12 -3
README.md CHANGED
@@ -50,7 +50,7 @@ We create a new Java dataset by scraping public repositories on GitHub. Our file
 # Collection
 
 We start the collection process by scraping **10500** public repositories using the [GitHub API](https://docs.github.com/en/rest/search/search?apiVersion=2022-11-28). We specifically look for repositories released under a strong copyleft license such as **GPL-2.0**, **GPL-3.0**, or **AGPL-3.0**. We use copyleft licenses to ensure our dataset is not contaminated with training data from Stack v2. This issue occurred with other publicly available file-level code datasets, including Stack v1, which claimed to contain only permissively licensed code but was nevertheless [contaminated with copyleft-licensed code](https://dl.acm.org/doi/10.1145/3650105.3652298). Stack v2 also [claims to exclude copyleft-licensed code](https://arxiv.org/abs/2402.19173), citing uncertainty about the community's stance and its low volume. Nevertheless, we still deduplicated our dataset against Stack v2 to ensure there was no overlap and that our data was safe for training.
- We extract repositories **created** between **2001** and **2023** in **decreasing order** of their **star counts**. To avoid **GitHub rate limits**, we use **timeouts** and **pagination** to fetch the repositories.
+ We extract repositories **created** between **2001** and **April 2024** in **decreasing order** of their **star counts**. To avoid **GitHub rate limits**, we use **timeouts** and **pagination** to fetch the repositories.
 The search is based on the **repository license type**, **star count**, and **creation date**.
 
 The features we extract for each repository are illustrated in the example below.
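The license/star/date search with pagination and timeouts described above can be sketched roughly as follows. This is an illustrative sketch, not the authors' actual collection script: the endpoint and search qualifiers follow the documented GitHub search API, while the helper names, the 2-second delay, and the query slicing are assumptions.

```python
# Illustrative sketch of the collection queries (NOT the authors' actual script).
# Builds GitHub search-API query strings from the filters described above:
# license type, star range, and creation-date window.
import time
import urllib.parse
import urllib.request

API_URL = "https://api.github.com/search/repositories"
COPYLEFT_LICENSES = ["gpl-2.0", "gpl-3.0", "agpl-3.0"]

def build_query(license_id, min_stars, max_stars, created_from, created_to):
    """Compose the search qualifiers for one license/star/date slice."""
    return (f"license:{license_id} "
            f"stars:{min_stars}..{max_stars} "
            f"created:{created_from}..{created_to}")

def fetch_page(query, page, token=None):
    """Fetch one page of results, sorted by stars in decreasing order."""
    params = urllib.parse.urlencode({
        "q": query, "sort": "stars", "order": "desc",
        "per_page": 100, "page": page,   # pagination: up to 100 repos per page
    })
    req = urllib.request.Request(f"{API_URL}?{params}")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
    time.sleep(2)  # crude timeout between requests to stay under the rate limit
    return body

print(build_query("gpl-3.0", 900, "*", "2001-01-01", "2001-02-28"))
# -> license:gpl-3.0 stars:900..* created:2001-01-01..2001-02-28
```

Each (license, star range, date window) slice is fetched page by page until exhausted, which is what keeps any single query under the API's per-search result cap.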
@@ -95,7 +95,7 @@ The features we extract for each repository are illustrated in the example below
 - **retrieval_date**: date when the repo was scraped from GitHub
 
 We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1000**-repository limit per window (for a personal GitHub account), we shorten the
- search space to a **one-month window** and restart the iteration. Otherwise, the window advances by two months. Once the 2001-2023 timeframe is covered, we reduce the star search space: between **900** and **100** stars, we decrease the interval by **50** (e.g. searching between [850, 900]), between **100** and **10** stars, we decrease the interval by **10**, and for the last **10** stars, we decrease by **1**. Figure 1 showcases the distribution of repositories with up to **500** stars. Since most repositories fall within the **0-100 star range**, using the **creation date** and **star count** filters helps us avoid API limits and scrape more data by narrowing the search space.
+ search space to a **one-month window** and restart the iteration. Otherwise, the window advances by two months. Once the 2001-2024 timeframe is covered, we reduce the star search space: between **900** and **100** stars, we decrease the interval by **50** (e.g. searching between [850, 900]), between **100** and **10** stars, we decrease the interval by **10**, and for the last **10** stars, we decrease by **1**. Figure 1 showcases the distribution of repositories with up to **500** stars. Since most repositories fall within the **0-100 star range**, using the **creation date** and **star count** filters helps us avoid API limits and scrape more data by narrowing the search space.
 Although the creation date window can be reduced even further (to the week or day level), a one-month window was enough for our needs. After retrieving the repositories, we extract all the files with a
 *.java* extension.
 
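The star-interval schedule described above (everything above 900 stars first, then 50-star steps down to 100, 10-star steps down to 10, and single-star steps for the last 10) can be sketched as follows. This is an editor's illustration, not the authors' code, and endpoint overlap handling is simplified.

```python
# Illustrative sketch (not the authors' code) of the star-interval schedule:
# wide 50-star steps from 900 down to 100, 10-star steps down to 10, then
# single-star steps for the final 10 stars. Intervals are (low, high) pairs;
# exact inclusive/exclusive boundary handling is simplified here.

def star_intervals():
    intervals = [(901, "*")]           # first pass: everything above 900 stars
    for hi in range(900, 100, -50):    # 900, 850, ..., 150
        intervals.append((hi - 50, hi))
    for hi in range(100, 10, -10):     # 100, 90, ..., 20
        intervals.append((hi - 10, hi))
    for hi in range(10, 0, -1):        # final 10 stars, one at a time
        intervals.append((hi - 1, hi))
    return intervals

schedule = star_intervals()
print(schedule[:3])  # -> [(901, '*'), (850, 900), (800, 850)]
```

Each interval is then crossed with the creation-date windows, so the narrowest slices land exactly where repository density is highest (the 0-100 star range).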
@@ -177,7 +177,7 @@ computational efficiency, crucial for handling the **Java-Stack v2’s size** of
 Lastly, we use a **Jaccard similarity threshold** of **0.7**, which proved to be efficient for both the [SantaCoder](https://arxiv.org/abs/2301.03988) and [StarCoder](https://arxiv.org/abs/2305.06161) models. Such a high threshold
 reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead.
 
- Instead of removing near-duplicates, we introduce a new feature to our dataset, called *near_dups_stkv2_idx*. This feature includes the IDs of the near-duplicate files from the Java-Stack v2 corresponding to the current file in our dataset.
+ Instead of removing near-duplicates, we introduce a new feature to our dataset, called *near_dups_stkv2_idx*. This feature is a list of the IDs of the near-duplicate files from Java-Stack v2 that correspond to the current file in our dataset.
 The table below shows the number of files removed by each preprocessing method and the final number of files remaining (excluding near-duplicates).
 Starting with **7.8 M** files, we are left with about **2.13 M** after applying all pre-processing methods (a count that still includes near-duplicates).
 Of the removed files, approximately **5.63 M** are exact duplicates (including about **0.87 M** from Java-Stack v2) and **0.8 M** are near-duplicates from Java-Stack v2.
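For reference, the Jaccard similarity behind the 0.7 threshold can be illustrated with a minimal exact-set sketch over token shingles. The shingle size is an assumption, and a pipeline at this scale would approximate the metric (e.g. with MinHash) rather than compare full sets pairwise.

```python
# Minimal illustration of the Jaccard-similarity metric behind the 0.7
# threshold. Exact-set version for exposition only; the shingle size k=5
# is an assumption, not the paper's setting.

def shingles(text, k=5):
    """Set of k-token shingles (contiguous token windows) for one file."""
    tokens = text.split()
    return {tuple(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| over the two shingle sets."""
    sa, sb = shingles(a), shingles(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 1.0

def is_near_duplicate(a, b, threshold=0.7):
    return jaccard(a, b) >= threshold

src = "public static int add ( int a , int b ) { return a + b ; }"
print(jaccard(src, src))  # identical files -> 1.0
```

Two files whose shingle sets overlap by 70% or more would be flagged, which is why the high threshold keeps false positives (and hence unnecessary comparisons) low.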
@@ -195,6 +195,11 @@ This implies that training any LLM on Stack v2 will breach copy-left code licens
 
 # Usage
 
+ By default, the dataset includes near-duplicate entries from Java-Stack v2, with their IDs listed in the *near_dups_stkv2_idx* field.
+ *An entry with an empty list in this field indicates that no near-duplicate files were found in Java-Stack v2 for that specific file.*
+ 
+ Near-duplicates can be removed as shown in the example below.
+ 
 ```python
 from datasets import load_dataset
 
@@ -209,6 +214,10 @@ data = load_dataset("LaughingLogits/Stackless_Java_V2", split="train", streaming
 for sample in iter(data):
     print(sample["content"])
 
+ # Filter the dataset to exclude near-duplicates from Java-Stack v2
+ dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
+ near_deduplicated_dataset = dataset.filter(lambda sample: len(sample["near_dups_stkv2_idx"]) == 0)
+ 
 ```
 