chenzhe0000 committed · Commit b4d4460 · verified · 1 parent: 28d9995

Update README.md

Files changed (1):
  1. README.md (+1 -1)

README.md CHANGED
@@ -2,4 +2,4 @@
 license: mit
 ---
 
-This dataset is a cleaned subset of the WikiText corpus for large language model pretraining. It has undergone preprocessing steps including cleaning, deduplication, normalization, and chunking to produce training-ready text segments. Due to its relatively small size, the dataset is intended for simple validation of text pretraining experiments rather than large-scale training.
+This dataset is a cleaned subset of the WikiText corpus (sourced from [Salesforce/wikitext](https://huggingface.co/datasets/Salesforce/wikitext)) for large language model pretraining. It has undergone preprocessing steps including cleaning, deduplication, normalization, and chunking to produce training-ready text segments. Due to its relatively small size, the dataset is intended for simple validation of text pretraining experiments rather than large-scale training.