---
language:
  - en
license: cc-by-sa-4.0
size_categories:
  - 10M<n<100M
task_categories:
  - text-generation
  - fill-mask
pretty_name: Wikipedia English Cleaned
tags:
  - wikipedia
  - english
  - language-modeling
---

# Wikipedia English Cleaned

## Dataset Description

A cleaned corpus of English Wikipedia articles, processed and formatted for training small language models.

### Dataset Summary

- **Language**: English
- **Size**: ~133 MB (plain text)
- **Format**: Plain text (`.txt`)
- **License**: CC-BY-SA 4.0 (Wikipedia content license)

### Source Data

The dataset is derived from English Wikipedia articles, cleaned and formatted for language model training.

## Dataset Structure

### Data Fields

The dataset consists of plain text files containing Wikipedia articles, with one article or paragraph per line.
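Because the format is one article or paragraph per line, iterating over the corpus is straightforward. The sketch below is illustrative only: the `iter_articles` helper is hypothetical, and an in-memory `StringIO` stands in for the actual dataset file.

```python
import io

def iter_articles(fileobj):
    """Yield each non-empty, stripped line as one article or paragraph."""
    for line in fileobj:
        text = line.strip()
        if text:
            yield text

# In-memory stand-in for the corpus file
corpus = io.StringIO("First article text.\n\nSecond article text.\n")
for article in iter_articles(corpus):
    print(article)
```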

### Data Splits

This dataset is provided as a single text file without predefined splits. Users can create their own train/validation/test splits as needed.
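One way to create such splits is a seeded shuffle over the corpus lines followed by slicing. This is a minimal sketch, not part of the dataset: the function name, split fractions, and dummy lines are all illustrative.

```python
import random

def split_lines(lines, val_frac=0.05, test_frac=0.05, seed=42):
    """Shuffle lines deterministically and slice into train/val/test."""
    rng = random.Random(seed)
    shuffled = list(lines)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

# Dummy lines standing in for Wikipedia paragraphs
lines = [f"article {i}" for i in range(100)]
train, val, test = split_lines(lines)
print(len(train), len(val), len(test))  # 90 5 5
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing training configurations.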

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("adityasasidhar/Wikipedia_Cleaned")

# Or load a specific split directly
dataset = load_dataset("adityasasidhar/Wikipedia_Cleaned", split="train")
```

### Example Use Case

This dataset was used to train a Small Language Model (SLM) with the following characteristics:

- **Model Size**: 15.58M parameters
- **Architecture**: Decoder-only Transformer
- **Training**: Combined with TinyStories dataset for ~100M tokens total
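Note that any token count depends on the tokenizer used. As a rough illustration only, a whitespace-based estimate can be computed as below; a real training setup would use a subword tokenizer (e.g. BPE), which typically yields more tokens per text.

```python
def estimate_tokens(text):
    """Rough token count via whitespace splitting; subword tokenizers
    usually produce somewhat more tokens than this."""
    return len(text.split())

sample = "Paris is the capital and most populous city of France."
print(estimate_tokens(sample))  # 10
```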

## Dataset Creation

### Curation Rationale

This dataset was created to provide clean, factual English text for training small language models. Wikipedia provides high-quality, encyclopedic content that helps models learn proper grammar, factual knowledge, and formal writing style.

### Source Data

- **Source**: English Wikipedia
- **Processing**: Cleaned and formatted for language model training
- **Quality**: High-quality encyclopedic content

## Considerations for Using the Data

### Social Impact

This dataset contains factual, encyclopedic content from Wikipedia. Users should be aware that:

- Wikipedia content reflects the biases and perspectives of its editors
- The dataset is suitable for general language modeling tasks
- For specific domains, additional fine-tuning may be necessary

### Limitations

- The dataset represents a snapshot of Wikipedia at a specific point in time
- May not include the most recent information
- Content is limited to English language articles

## Additional Information

### Licensing Information

The dataset is released under the CC-BY-SA 4.0 license, consistent with Wikipedia's content license.

### Citation Information

If you use this dataset, please cite:

```bibtex
@misc{wikipedia_cleaned_2026,
  author = {Aditya Sasidhar},
  title = {Wikipedia English Cleaned},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/adityasasidhar/Wikipedia_Cleaned}}
}
```

### Contributions

Dataset curated and uploaded by Aditya Sasidhar.