Update README.md
README.md (CHANGED)
@@ -2,4 +2,4 @@
 license: mit
 ---

-This dataset is a cleaned subset of the WikiText corpus for large language model pretraining. It has undergone preprocessing steps including cleaning, deduplication, normalization, and chunking to produce training-ready text segments. Due to its relatively small size, the dataset is intended for simple validation of text pretraining experiments rather than large-scale training.
+This dataset is a cleaned subset of the WikiText corpus (sourced from [Salesforce/wikitext](https://huggingface.co/datasets/Salesforce/wikitext)) for large language model pretraining. It has undergone preprocessing steps including cleaning, deduplication, normalization, and chunking to produce training-ready text segments. Due to its relatively small size, the dataset is intended for simple validation of text pretraining experiments rather than large-scale training.
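The preprocessing steps the card names (cleaning, deduplication, normalization, chunking) could be sketched roughly as below. This is only an illustrative assumption of what such a pipeline might look like; the function names, the heading-stripping heuristic, and the chunk size are all invented here and are not the dataset's actual processing code.

```python
# Hypothetical sketch of the card's preprocessing steps:
# cleaning, deduplication, normalization, chunking.
# All names and parameters are assumptions, not the real pipeline.
import re
import unicodedata


def clean(text: str) -> str:
    # Drop blank lines and WikiText-style heading markers (e.g. " = Title = ").
    lines = [l for l in text.splitlines()
             if l.strip() and not l.strip().startswith("=")]
    return "\n".join(lines)


def normalize(text: str) -> str:
    # Unicode NFKC normalization plus whitespace collapsing.
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"[ \t]+", " ", text)


def deduplicate(docs):
    # Exact-match deduplication via a set of already-seen documents.
    seen, unique = set(), []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            unique.append(doc)
    return unique


def chunk(text: str, size: int = 1024):
    # Split into fixed-size character chunks (size is an assumed parameter).
    return [text[i:i + size] for i in range(0, len(text), size)]


docs = ["= Heading =\nSome   article text.", "Some   article text."]
cleaned = [normalize(clean(d)) for d in docs]
segments = [c for d in deduplicate(cleaned) for c in chunk(d)]
```

A real pipeline would likely also use near-duplicate detection (e.g. hashing) and token-based rather than character-based chunking, but the order of operations above matches the steps listed in the card.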