LaughingLogits committed on
Commit
b1a4b97
·
verified ·
1 Parent(s): 32c806d

Update README.md

Files changed (1)
  1. README.md +1 -175
README.md CHANGED
@@ -85,181 +85,7 @@ configs:
  ---

  # Dataset Summary
-
- We create a new Java dataset by scraping public repositories on GitHub. Our file-level dataset contains individual Java files rather than entire projects or smaller code snippets such as functions or class definitions. Our approach combines techniques used in creating the [Stack](https://huggingface.co/bigcode) dataset family, which served as the training foundation for the [StarCoder](https://huggingface.co/bigcode) models. We specifically build on the [Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2), its latest and publicly available release. Our dataset creation pipeline involves three key stages: collection, cleaning, and deduplication.
-
- # Collection
-
- We start the collection process by scraping **10,500** public repositories using the [GitHub API](https://docs.github.com/en/rest/search/search?apiVersion=2022-11-28). Selecting an extra **500** repositories ensures that we collect at least **10,000**, since some repositories may be deleted or made private between fetching the repository list and downloading their content. We specifically look for repositories released under a strong copyleft license such as **GPL-2.0**, **GPL-3.0**, or **AGPL-3.0**. Restricting ourselves to copyleft licenses ensures our dataset is not contaminated with training data from Stack v2. This issue occurred with other publicly available file-level code datasets, including Stack v1, which claimed to contain only permissively licensed code but was [contaminated with copyleft-licensed code](https://dl.acm.org/doi/10.1145/3650105.3652298). Stack v2 also [claims to exclude copyleft-licensed code](https://arxiv.org/abs/2402.19173) due to uncertainty about the community's stance and its low volume. Nevertheless, we still deduplicated our dataset against Stack v2 to ensure there was no overlap and that our data was safe for training.
- We extract repositories **created** up until **April 2024** in **decreasing order** of their **star counts**. To avoid **GitHub rate limits**, we use **timeouts** and **pagination** to fetch the repositories.
- The search is based on the **repository license type**, **star count**, and **creation date**.
-
- The features we extract for each repository are illustrated in the example below.
-
- ```json
- {
-   "id": 126178683,
-   "full_name": "halo-dev/halo",
-   "html_url": "https://github.com/halo-dev/halo",
-   "stargazers_count": 29115,
-   "forks_count": 8985,
-   "watchers_count": 29115,
-   "open_issues_count": 278,
-   "language": "Java",
-   "created_at": "2018-03-21T12:56:52Z",
-   "pushed_at": "2023-10-28T16:29:39Z",
-   "license": {
-     "key": "gpl-3.0",
-     "name": "GNU General Public License v3.0",
-     "spdx_id": "GPL-3.0",
-     "url": "https://api.github.com/licenses/gpl-3.0",
-     "node_id": "MDc6TGljZW5zZTk="
-   },
-   "retrieval_date": "10/30/2023, 3:24:57 PM (Europe/Amsterdam)"
- }
- ```
-
- ### Repository Fields
-
- - **id**: unique ID of the repo
- - **full_name**: complete name of the repo
- - **html_url**: URL to the repo
- - **stargazers_count**: number of stars of the repo
- - **forks_count**: number of forks of the repo
- - **watchers_count**: number of watchers of the repo
- - **open_issues_count**: number of open issues of the repo at extraction time
- - **language**: main language of the repo
- - **created_at**: creation date of the repo
- - **pushed_at**: date of the most recent push to the repo before the extraction date
- - **license**: license type of the repo
- - **retrieval_date**: date when the repo was scraped from GitHub
-
- We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1,000**-repository limit per window (for a personal GitHub account), we shorten the search space to a **one-month window** and restart the iteration. Otherwise, the window advances by two months. Once the timeframe until April 2024 is covered, we reduce the star search space: between **900** and **100** stars, we decrease the interval by **50** (e.g., search between [900, 850]); between **100** and **10** stars, we decrease the interval by **10**; and for the last **10** stars, we decrease it by **1**. Figure 1 shows the distribution of repositories with up to **500** stars. Since most repositories fall within the **0-100 star range**, using the **creation date** and **star count** filters helps us avoid API limits and scrape more data by narrowing the search space.
- Although the creation date window could be reduced even further (to the week or day level), a one-month window was enough for our needs. After retrieving the repositories, we extract all files with a *.java* extension.
-
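The windowed search described above can be sketched as a query generator. This is a minimal illustration, not the actual scraper: the function names, the 2008 start date, and the fixed `gpl-3.0` license value are our placeholders, and the real pipeline additionally handles the API calls, timeouts, and pagination.

```python
from datetime import date, timedelta

def star_intervals():
    """Star ranges in the order described above: >900, then steps of 50, 10, and 1."""
    yield (901, None)                    # more than 900 stars, no upper bound
    for hi in range(900, 100, -50):      # [850, 900], [800, 850], ..., [100, 150]
        yield (hi - 50, hi)
    for hi in range(100, 10, -10):       # [90, 100], ..., [10, 20]
        yield (hi - 10, hi)
    for hi in range(10, 0, -1):          # [9, 10], ..., [0, 1]
        yield (hi - 1, hi)

def search_queries(license_id="gpl-3.0", until=date(2024, 4, 30)):
    """Yield GitHub search queries over star ranges and ~two-month creation windows."""
    for lo, hi in star_intervals():
        stars = f"stars:>{lo - 1}" if hi is None else f"stars:{lo}..{hi}"
        start = date(2008, 1, 1)         # placeholder lower bound for creation dates
        while start < until:
            end = min(start + timedelta(days=61), until)
            yield (f"language:java license:{license_id} "
                   f"{stars} created:{start.isoformat()}..{end.isoformat()}")
            start = end + timedelta(days=1)
```

Each yielded string is a value for the `q` parameter of GitHub's search endpoint, which caps results at 1,000 per query; narrowing the windows is what keeps each query under that cap.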
- The final dataset structure is shown in the example below.
-
- ```json
- {
-   "file_name": "Font.java",
-   "file_path": ".../lateralgm/resources/Font.java",
-   "content": "*/ package org.lateralgm.resources; import java.util.EnumMap; import org.lateralgm.main.Prefs; ...",
-   "file_size": 1985,
-   "language": "Java",
-   "extension": ".java",
-   "repo_name": "lwizchz/GameMaker-HTML5-Player",
-   "repo_stars": 22,
-   "repo_forks": 9,
-   "repo_open_issues": 0,
-   "repo_created_at": "2011-09-10T16:05:20Z",
-   "repo_pushed_at": "2013-05-06T23:00:17Z",
-   "sha": "00046809b218b2c058f4be7...",
-   "near_dups_stkv2_idx": [21192944, 106219595]
- }
- ```
-
- ### Dataset Fields
-
- - **file_name**: name of the file extracted from its repo
- - **file_path**: path to the file in its repo
- - **content**: content of the file
- - **file_size**: size of the file
- - **language**: language of the file
- - **extension**: language extension of the file
- - **repo_name**: complete name of the file's repo
- - **repo_stars**: number of stars of the file's repo
- - **repo_forks**: number of forks of the file's repo
- - **repo_open_issues**: number of open issues of the file's repo at the extraction date
- - **repo_created_at**: creation date of the file's repo
- - **repo_pushed_at**: date of the most recent push to the file's repo before the extraction date
- - **sha**: sha value of the file's content
- - **near_dups_stkv2_idx**: IDs of files from Java-Stack v2 that are near-duplicates of the current file
-
- <div style="text-align: center;">
-   <img src="https://cdn-uploads.huggingface.co/production/uploads/66a89f0fd6625ead0411af50/fctcChY0DRwxMeXazUWUV.png" alt="Figure 1: Distribution of scraped repositories with at most 500 stars." style="display: block; margin: 0 auto; width: 600px; height: auto;" />
-   <p><b>Figure 1:</b> Distribution of scraped repositories with at most 500 stars.</p>
- </div>
-
-
- # Cleaning
-
- The next stage in our dataset pipeline is the cleaning procedure. We exclude Java files **larger than 50 MB** and those with **fewer than 10 words**.
- We also remove auto-generated files by searching for specific keywords in the [first 5 lines of each file](https://huggingface.co/datasets/bigcode/the-stack-v2). If any of these keywords occurs, the file is considered auto-generated and removed:
- - *generated by*
- - *autogenerated*
- - *auto-generated*
- - *this file was generated*
- - *this file is generated*
- - *generated automatically*
- - *automatically generated*
-
- These keywords were derived from the Stack v2 approach and manual file inspection.
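The cleaning rules above can be combined into a single predicate. This is a minimal stdlib sketch assuming file contents arrive as strings; measuring size in UTF-8 bytes and counting words as whitespace-separated tokens are our assumptions, since the exact definitions are not specified.

```python
AUTOGEN_MARKERS = (
    "generated by", "autogenerated", "auto-generated",
    "this file was generated", "this file is generated",
    "generated automatically", "automatically generated",
)
MAX_FILE_SIZE = 50 * 1024 * 1024  # 50 MB, assuming size is measured in bytes
MIN_WORDS = 10                    # assuming words are whitespace-separated tokens

def keep_file(content: str) -> bool:
    """Return True if a Java file survives the cleaning stage."""
    if len(content.encode("utf-8")) > MAX_FILE_SIZE:
        return False                      # too large
    if len(content.split()) < MIN_WORDS:
        return False                      # too few words
    # Check only the first 5 lines for auto-generation markers, case-insensitively.
    head = "\n".join(content.splitlines()[:5]).lower()
    return not any(marker in head for marker in AUTOGEN_MARKERS)
```

Files failing any of the three rules are dropped before deduplication.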
-
- # Deduplication
-
- The final stage of our dataset pipeline is the deduplication process. First, we remove any duplicated repositories obtained due to the pagination process. We then perform **exact deduplication** between **our dataset and the Java-Stack v2**, and **within our dataset itself**, using the **sha256** function to generate a hash for each file.
- We choose this hash function because it distributes hash values uniformly across the hash space and minimizes collisions.
- For **near-deduplication**, we use the **MinHashLSH** algorithm from the [*datasketch*](https://ekzhu.com/datasketch/lsh.html) library. To calculate the MinHashes, we use the same hash function as above, but keep only the first 16 bytes to obtain 128-bit hash values.
- This approach balances the need for a strong hash function with the efficiency of a shorter hash length.
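The hashing just described can be sketched with Python's `hashlib`: a full sha256 digest as the exact-deduplication key, and the first 16 bytes of the digest as a 128-bit value for the MinHash step. The `exact_dedup` helper is our illustration of keep-first-occurrence deduplication, not the released pipeline code.

```python
import hashlib

def sha256_hex(content: str) -> str:
    """Full sha256 hex digest, used as the key for exact deduplication."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def minhash_hash_value(content: str) -> int:
    """128-bit integer taken from the first 16 bytes of the sha256 digest."""
    return int.from_bytes(hashlib.sha256(content.encode("utf-8")).digest()[:16], "big")

def exact_dedup(files: list[str]) -> list[str]:
    """Keep only the first occurrence of each distinct content hash."""
    seen: set[str] = set()
    kept = []
    for content in files:
        digest = sha256_hex(content)
        if digest not in seen:
            seen.add(digest)
            kept.append(content)
    return kept
```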
-
- Additionally, we use **128** permutations for MinHash, with weights of **0.4** for **precision** and **0.6** for **recall**. We generate **7-character shingles** after [lowercasing the file content and removing whitespace](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
- We find that 7-shingles provide a reasonable trade-off between the number of shingles and the data processed: small enough to keep the number of unique shingles manageable, yet large enough to provide meaningful comparisons.
- It has been shown that shingles should be long enough to make any given shingle unlikely to appear across documents, with **k = 5** suggested for smaller documents such as [emails](http://infolab.stanford.edu/~ullman/mmds/book.pdf).
- However, Java files usually draw on a **larger alphabet** of characters than emails, including arithmetic and comparison operators that are rare in emails.
- Thus, given the increased **complexity** and **size** of Java files, we consider 7-shingles appropriate to capture sufficient context, ensuring uniqueness and **reducing false positives**, which smaller shingles such as k = 5 might fail to achieve.
- Furthermore, **k = 9** was shown to be a safe choice for [large research articles](http://infolab.stanford.edu/~ullman/mmds/book.pdf); for our needs, however, 7-shingles strike a balance between accuracy and computational efficiency, which is crucial for handling the **Java-Stack v2's size** of over **222 M** files. This choice reduces the number of comparisons while keeping the shingle space manageable.
- Lastly, we use a **Jaccard similarity threshold** of **0.7**, which proved efficient for both the [SantaCoder](https://arxiv.org/abs/2301.03988) and [StarCoder](https://arxiv.org/abs/2305.06161) models. Such a high threshold reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead. Moreover, this standard threshold value has been shown to be [robust for duplicate detection](https://dl.acm.org/doi/10.1145/3359591.3359735).
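The pipeline itself relies on datasketch's `MinHashLSH` index; the stdlib-only sketch below instead illustrates the underlying mechanics on a pair of files: 7-character shingles over lowercased, whitespace-free text, 128 hash "permutations" simulated here by salting sha256 with the permutation index, and the 0.7 similarity threshold. It is an illustration, not the production implementation.

```python
import hashlib
import re

K = 7           # shingle length used by the pipeline
NUM_PERM = 128  # number of MinHash permutations

def shingles(text: str, k: int = K) -> set[str]:
    """Lowercase, strip all whitespace, then collect every k-character shingle."""
    s = re.sub(r"\s+", "", text.lower())
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def minhash_signature(sh: set[str], num_perm: int = NUM_PERM) -> list[int]:
    """Simulate each permutation by salting sha256 with the permutation index."""
    return [
        min(int.from_bytes(hashlib.sha256(f"{seed}:{x}".encode()).digest()[:16], "big")
            for x in sh)
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """The fraction of agreeing signature slots estimates the true Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def near_duplicates(a: str, b: str, threshold: float = 0.7) -> bool:
    return estimated_jaccard(minhash_signature(shingles(a)),
                             minhash_signature(shingles(b))) >= threshold
```

At scale, an LSH index (as in datasketch, with threshold 0.7, 128 permutations, and weights (0.4, 0.6)) avoids these pairwise comparisons by banding the signatures.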
-
- Instead of removing near-duplicates, we introduce a new feature in our dataset, called *near_dups_stkv2_idx*. This feature is a list of IDs of the near-duplicate files from the Java-Stack v2 corresponding to the current file in our dataset.
- The table below shows the number of files removed by each preprocessing method and the final number of files we are left with (excluding near-duplicates).
- Starting with **7.8 M** files, we are left with about **2.13 M** after applying all preprocessing methods (a figure that still includes near-duplicates).
- Of the removed files, approximately **5.63 M** are exact duplicates (including about **0.87 M** shared with the Java-Stack v2), and **0.8 M** are near-duplicates of Java-Stack v2 files.
- This implies that training any LLM on Stack v2 will breach copyleft code licenses, despite the dataset creators' claim that files under such licenses were removed.
-
- ### Files removed by each pre-processing method
-
- | **Method** | **#Files** |
- | :--------: | :--------: |
- | Raw dataset | 7.80 M |
- | Auto-generated | 0.04 M |
- | Exact-deduplication | 5.63 M |
- | Near-deduplication | 0.80 M |
- | Final dataset | 1.33 M |
-
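As a quick sanity check, the counts in the table are mutually consistent with the figures quoted above:

```python
raw, autogen, exact_dups, near_dups = 7.80, 0.04, 5.63, 0.80  # in millions of files

final = raw - autogen - exact_dups - near_dups
assert round(final, 2) == 1.33               # final dataset, near-duplicates excluded
assert round(final + near_dups, 2) == 2.13   # size with near-duplicates retained
```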
-
- # Usage
-
- By default, the dataset includes near-duplicate entries from Java-Stack v2, with their IDs listed in the *near_dups_stkv2_idx* field.
- *An entry with an empty list in this field indicates that no near-duplicate files were found in Java-Stack v2 for that specific file.*
-
- Near-duplicates can be removed as shown in the example below.
-
- ```python
- from datasets import load_dataset
-
- # Load the full dataset
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2")
-
- # Load the train split (the only split available)
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
-
- # Stream the dataset
- data = load_dataset("LaughingLogits/Stackless_Java_V2", split="train", streaming=True)
- for sample in data:
-     print(sample["content"])
-
- # Filter out near-duplicates of Java-Stack v2 files
- dataset = load_dataset("LaughingLogits/Stackless_Java_V2", split="train")
- near_deduplicated_dataset = dataset.filter(lambda sample: len(sample["near_dups_stkv2_idx"]) == 0)
- ```

  ---

  # Dataset Summary
+ This is the dataset used to train the AP-MAE models. It is a subset of The Heap, released for reproducibility.