Update README.md

README.md CHANGED

@@ -92,8 +92,6 @@ Each dataset was processed as follows:
 After filtering, 100 000 chunks per source were included in the final dataset.

 ### Clustering

-Note: The dataset clustering hasn't been performed yet for the latest version containing EssentialAI and FineFineWeb.
-
 Agglomerative clustering was applied using embeddings from the [`Snowflake/snowflake-arctic-embed-xs`](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)
 model at multiple cluster counts: 100, 200, 500, 1 000, 2 000, 5 000, 10 000, 20 000, 50 000, 100 000, and 200 000 clusters, enabling flexible dataset configurations.
|
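The clustering step described in the README could be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it assumes scikit-learn's `AgglomerativeClustering`, uses random vectors as stand-ins for real `snowflake-arctic-embed-xs` embeddings (384 dimensions is an assumption about the model's output size), and uses small cluster counts in place of the 100–200 000 range listed in the README.

```python
# Hypothetical sketch of the clustering step: embed each chunk,
# then run agglomerative clustering at several cluster counts.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Placeholder for real snowflake-arctic-embed-xs chunk embeddings.
embeddings = rng.normal(size=(1000, 384))

# The README lists counts from 100 up to 200 000; a small subset is used here.
cluster_counts = [10, 50, 100]
labels_by_count = {}
for k in cluster_counts:
    # Hierarchical (agglomerative) clustering cut at exactly k clusters.
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(embeddings)
    labels_by_count[k] = labels
```

Running the hierarchy at several cut points this way is what makes the flexible configurations possible: the same merge tree yields coarser or finer groupings depending on the chosen cluster count.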