Update README.md
# DCLM-Edu
## Description
This is a filtered version of the [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) dataset, built with the FineWeb-Edu educational quality [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier). We annotate each web page with an educational quality score on a scale from 0 to 5 and keep only samples with a score higher than 2. This dataset is intended for training language models and was used to train [SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) and [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
**_Note:_** As shown in the performance section, we find that further filtering the dataset to keep only **samples with `edu_int_score>=3` yields even better downstream performance when training small language models**. We include score 2 samples to allow for rebalancing and added diversity, but you can filter the dataset with `datasets` or `datatrove` as shown below.
## How to use
### Using `datasets`