Capitalize "English" on dataset card.

#75
by MihaiPopa-1 - opened
Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -509,7 +509,7 @@ configs:
 
 ## What is it?
 
-The 🍷 FineWeb dataset consists of more than **18.5T tokens** (originally 15T tokens) of cleaned and deduplicated english web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library, our large scale data processing library.
+The 🍷 FineWeb dataset consists of more than **18.5T tokens** (originally 15T tokens) of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library, our large scale data processing library.
 
 🍷 FineWeb was originally meant to be a fully open replication of 🦅 [RefinedWeb](https://huggingface.co/papers/2306.01116), with a release of the **full dataset** under the **ODC-By 1.0 license**. However, by carefully adding additional filtering steps, we managed to push the performance of 🍷 FineWeb well above that of the original 🦅 RefinedWeb, and models trained on our dataset also outperform models trained on other commonly used high quality web datasets (like C4, Dolma-v1.6, The Pile, SlimPajama, RedPajam2) on our aggregate group of [benchmark tasks](https://huggingface.co/datasets/HuggingFaceFW/fineweb/blob/main/lighteval_tasks.py).