Improve dataset card with task category and Github link
#5
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,6 +1,13 @@
 ---
 license: mit
+task_categories:
+- text-generation
+tags:
+- data-selection
+- pretraining
+- efficient-training
 ---
+
 <p align="center">
 📑 <a href="https://arxiv.org/abs/2503.00808" target="_blank">Paper</a>&nbsp&nbsp&nbsp|&nbsp&nbsp&nbsp🔨 <a href="https://huggingface.co/hkust-nlp/preselect-fasttext-classifier" target="_blank">fastText Classifier</a>&nbsp&nbsp&nbsp|&nbsp&nbsp&nbsp🤗 <a href="https://huggingface.co/datasets/hkust-nlp/PreSelect-100B" target="_blank">Released Dataset</a>&nbsp&nbsp&nbsp|&nbsp&nbsp&nbsp📦 <a href="https://github.com/hkust-nlp/PreSelect" target="_blank">Repo</a>
 <br>
@@ -10,7 +17,7 @@ PreSelect-100B is a curated ~100B token pretraining dataset that achieves great
 It is filtered by [PreSelect-Classifier](https://huggingface.co/hkust-nlp/PreSelect-classifier) at 10% threshold, where the pool is a randomly sampled subset of [DCLM-refinedweb](https://data.commoncrawl.org/contrib/datacomp/DCLM-refinedweb/index.html), which is a cleaned version of Common Crawl raw data but without any model-based filtering.
 
 ### Benchmark results
-
+Training using the PreSelect curated dataset achieves superior results than other dataset selection methods on various downstream tasks, as shown in the comparison below.
 
 
 
@@ -24,6 +31,4 @@ If you find this work helpful, please kindly cite as:
 year={2025},
 eprint={2503.00808},
 }
-```
-
-
+```
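As a rough illustration of what filtering "at 10% threshold" means (this is not the released pipeline — the real selection uses the PreSelect fastText classifier over DCLM-refinedweb), one can think of it as scoring every document in the pool and keeping only the top-scoring 10%. A minimal Python sketch with made-up document names and scores:

```python
# Hypothetical sketch of threshold-based data selection: score each
# document with a quality classifier, then keep the top 10% by score.
# Document names and scores below are invented for illustration only.

def select_top_fraction(docs_with_scores, fraction=0.10):
    """Keep the top `fraction` of documents, ranked by classifier score."""
    ranked = sorted(docs_with_scores, key=lambda pair: pair[1], reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return [doc for doc, _ in ranked[:keep]]

# Toy pool of 10 documents with hypothetical classifier scores.
pool = [(f"doc{i}", score) for i, score in enumerate(
    [0.91, 0.12, 0.55, 0.87, 0.03, 0.76, 0.44, 0.98, 0.21, 0.66])]

selected = select_top_fraction(pool, fraction=0.10)
print(selected)  # → ['doc7']  (the single highest-scoring document in a pool of 10)
```

At the scale described in the card, the same idea is applied to the full sampled pool, so a 10% threshold over it yields the ~100B-token released dataset.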