---
license: mit
---

# Dataset Card for Cleaned & Chunked WikiText Corpus

## 📌 Dataset Description

This dataset is a cleaned and chunked version of a WikiText-style corpus, designed for language model pretraining and evaluation.

The preprocessing pipeline follows a lightweight data curation paradigm:

cleaning → deduplication → normalization → token-aware chunking

Specifically, the dataset includes:

  • Removal of special and noisy characters
  • Cleaning of formatting artifacts (e.g., HTML tags, irregular symbols)
  • Deduplication to reduce redundant or highly similar text samples
  • Text normalization (whitespace, encoding, etc.)
  • Chunking of long documents into smaller segments suitable for model training

Each sample corresponds to a cleaned text chunk, rather than a full original document.
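The pipeline stages above can be sketched as follows. This is an illustrative reconstruction, not the actual preprocessing code: the cleaning regexes, the exact-hash deduplication, and the 512-token chunk size are assumptions, and whitespace tokens stand in for a real tokenizer.

```python
import hashlib
import re

def clean(text: str) -> str:
    # Strip HTML tags and collapse irregular whitespace
    # (illustrative rules; the card does not specify the exact ones).
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()

def dedupe(docs):
    # Exact-match deduplication via content hashing; the real pipeline
    # may also drop near-duplicates.
    seen = set()
    for doc in docs:
        h = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            yield doc

def chunk(text: str, max_tokens: int = 512):
    # Token-aware chunking: split long documents into fixed-size segments.
    tokens = text.split()
    for i in range(0, len(tokens), max_tokens):
        yield " ".join(tokens[i : i + max_tokens])

docs = ["<p>Hello   world</p>", "<p>Hello world</p>", "A " * 1000]
cleaned = [clean(d) for d in docs]
chunks = [c for d in dedupe(cleaned) for c in chunk(d)]
```

Here the two "Hello world" documents collapse into one after cleaning and deduplication, and the long third document is split into two chunks.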


## 📊 Data Structure

Each sample in the dataset follows this structure:

```json
{
  "uid": "string",
  "content": "string",
  "meta_data": {
    "index": "int",
    "total": "int",
    "length": "int"
  }
}
```
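Since each sample is a chunk rather than a full document, a full document can be reassembled from its chunks. The sketch below assumes `meta_data.index` is the chunk's 0-based position within its source document and `meta_data.total` is the document's chunk count; the card does not define these field semantics, nor the join delimiter, so treat this as a hypothetical usage pattern.

```python
def reassemble(chunks):
    # Reassemble one document's chunks using meta_data.index / meta_data.total.
    ordered = sorted(chunks, key=lambda c: c["meta_data"]["index"])
    # Sanity check: we should have exactly `total` chunks for this document.
    assert ordered[0]["meta_data"]["total"] == len(ordered)
    # Joining with a space is an assumption; the pipeline's delimiter is unknown.
    return " ".join(c["content"] for c in ordered)

# Hypothetical samples following the schema above.
sample_chunks = [
    {"uid": "doc1-1", "content": "second part.",
     "meta_data": {"index": 1, "total": 2, "length": 12}},
    {"uid": "doc1-0", "content": "First part,",
     "meta_data": {"index": 0, "total": 2, "length": 11}},
]
```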