agentlans committed on
Commit 5ed6334 · verified · 1 parent: bcfe1b5

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -77,7 +77,9 @@ A curated collection of English-language texts for AI training and research.
 ### Sources
 - [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
 - [openbmb/Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
-- [Zyphra/Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2)
+- [Zyphra/Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2)
+- [EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample](https://huggingface.co/datasets/EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample)
+- [m-a-p/FineFineWeb](https://huggingface.co/datasets/m-a-p/FineFineWeb)
 
 Each dataset was processed as follows:
 
@@ -90,6 +92,8 @@ Each dataset was processed as follows:
 After filtering, 100 000 chunks per source were included in the final dataset.
 
 ### Clustering
+Note: Clustering has not yet been performed for the latest version, which adds the EssentialAI and FineFineWeb sources.
+
 Agglomerative clustering was applied using embeddings from the [`Snowflake/snowflake-arctic-embed-xs`](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)
 model at multiple cluster counts: 100, 200, 500, 1 000, 2 000, 5 000, 10 000, 20 000, 50 000, 100 000, and 200 000 clusters, enabling flexible dataset configurations.
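
The multi-resolution clustering step described above can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline: scikit-learn's `AgglomerativeClustering` is assumed as the implementation (the README does not name one), and random vectors stand in for the real 384-dimensional `snowflake-arctic-embed-xs` embeddings.

```python
# Sketch of clustering the same embeddings at several cluster counts.
# Assumptions: scikit-learn AgglomerativeClustering as a stand-in
# implementation; random placeholder vectors instead of real
# snowflake-arctic-embed-xs embeddings (384-dimensional).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(300, 384))  # placeholder document embeddings

# The README clusters at resolutions from 100 up to 200 000; the small
# counts here just keep the sketch fast.
cluster_counts = [10, 30]
labels_by_k = {
    k: AgglomerativeClustering(n_clusters=k).fit(embeddings).labels_
    for k in cluster_counts
}

for k, labels in labels_by_k.items():
    # each resolution assigns every chunk to one of exactly k clusters
    print(k, len(set(labels)))
```

Storing one label column per resolution in this way is what would enable the "flexible dataset configurations" the README mentions: a user picks the granularity that suits their sampling or deduplication needs.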
99