We develop **The Heap**, a new contamination-free multilingual code dataset comprising 57 languages, which facilitates reproducible LLM evaluation. The reproduction package can be found [here](https://anonymous.4open.science/r/heap-forge/README.md).
# Is your code in The Heap?
An opt-out mechanism will be provided for the final release of the dataset.
# Collection
We collect up to **50,000** public repositories using the [GitHub API](https://docs.github.com/en/rest/search/search?apiVersion=2022-11-28), focusing on *license type*, *star count*, and *creation date*. Repositories with non-permissive licenses are prioritized to reduce contamination, as public code datasets we deduplicate against primarily focus on permissive or no-license repositories. We select repositories created before **April 2024** in decreasing order of their star counts. To handle GitHub rate limits, we use timeouts and pagination during the scraping process.
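The collection step above can be sketched with the GitHub Search API. The function names, query filters, and back-off strategy below are illustrative assumptions, not the actual pipeline; note also that the Search API returns at most 1,000 results per query, so a real 50,000-repository crawl would have to partition its searches (e.g., by star-count ranges).

```python
import json
import time
import urllib.error
import urllib.parse
import urllib.request

API = "https://api.github.com/search/repositories"

def build_query(language, cutoff="2024-04-01"):
    """Search parameters: repositories in `language` created before `cutoff`,
    ordered by decreasing star count (illustrative filters)."""
    return {
        "q": f"language:{language} created:<{cutoff}",
        "sort": "stars",
        "order": "desc",
        "per_page": "100",
    }

def fetch_repositories(language, token, max_pages=10):
    """Page through search results, backing off when rate-limited."""
    repos = []
    page = 1
    while page <= max_pages:
        params = dict(build_query(language), page=str(page))
        url = f"{API}?{urllib.parse.urlencode(params)}"
        req = urllib.request.Request(url, headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        })
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                items = json.load(resp).get("items", [])
        except urllib.error.HTTPError as err:
            if err.code == 403:  # rate limited: wait, then retry this page
                time.sleep(60)
                continue
            raise
        if not items:  # past the last page of results
            break
        repos.extend(items)
        page += 1
    return repos
```

Pagination via the `page` parameter and a simple sleep-and-retry on HTTP 403 correspond to the timeouts and pagination mentioned above; a production scraper would additionally read the `X-RateLimit-Reset` response header to time its retries.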