---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10M<n<100M
task_categories:
- text-generation
- fill-mask
pretty_name: Wikipedia English Cleaned
tags:
- wikipedia
- english
- language-modeling
---

# Wikipedia English Cleaned

## Dataset Description

A cleaned English Wikipedia text corpus suitable for language model training: English Wikipedia articles processed and cleaned for use in training small language models.

### Dataset Summary

- **Language**: English
- **Size**: ~133 MB (plain text)
- **Format**: Plain text (.txt)
- **License**: CC-BY-SA 4.0 (Wikipedia content license)

### Source Data

The dataset is derived from English Wikipedia articles, cleaned and formatted for language model training.

## Dataset Structure

### Data Fields

The dataset consists of plain text files containing Wikipedia articles, with one article or paragraph per line.

### Data Splits

This dataset is provided as a single text file without predefined splits. Users can create their own train/validation/test splits as needed.
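Because the corpus is a single plain-text file with one record per line, a split can be made directly on lines. A minimal stdlib-only sketch (90/5/5 split, made deterministic with a fixed seed; the fractions and the in-memory stand-in corpus are illustrative, not part of the dataset):

```python
import random

def split_lines(lines, val_frac=0.05, test_frac=0.05, seed=42):
    """Shuffle line indices deterministically, then carve out val/test."""
    idx = list(range(len(lines)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(lines) * val_frac)
    n_test = int(len(lines) * test_frac)
    val = [lines[i] for i in idx[:n_val]]
    test = [lines[i] for i in idx[n_val:n_val + n_test]]
    train = [lines[i] for i in idx[n_val + n_test:]]
    return train, val, test

# Stand-in corpus of 100 "articles"
train, val, test = split_lines([f"article {i}" for i in range(100)])
print(len(train), len(val), len(test))  # 90 5 5
```

The `datasets` library's built-in `train_test_split` method achieves the same on a loaded `Dataset` object.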

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("adityasasidhar/Wikipedia_Cleaned")

# Or load the train split directly
dataset = load_dataset("adityasasidhar/Wikipedia_Cleaned", split="train")
```

### Example Use Case

This dataset was used to train a Small Language Model (SLM) with the following characteristics:
- **Model Size**: 15.58M parameters
- **Architecture**: Decoder-only Transformer
- **Training**: Combined with TinyStories dataset for ~100M tokens total
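When budgeting a token count like the ~100M above, a rough estimate can be obtained by whitespace splitting before committing to a full tokenizer pass (an approximation only; real training would count with the model's own tokenizer):

```python
def estimate_tokens(lines):
    """Rough token count via whitespace splitting (approximation only)."""
    return sum(len(line.split()) for line in lines)

# Illustrative sample lines, not actual dataset content
sample = ["The quick brown fox.", "Wikipedia is an encyclopedia."]
print(estimate_tokens(sample))  # 8
```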

## Dataset Creation

### Curation Rationale

This dataset was created to provide clean, factual English text for training small language models. Wikipedia provides high-quality, encyclopedic content that helps models learn proper grammar, factual knowledge, and formal writing style.

### Source Data

- **Source**: English Wikipedia
- **Processing**: Cleaned and formatted for language model training
- **Quality**: High-quality encyclopedic content

## Considerations for Using the Data

### Social Impact

This dataset contains factual, encyclopedic content from Wikipedia. Users should be aware that:
- Wikipedia content reflects the biases and perspectives of its editors
- The dataset is suitable for general language modeling tasks
- For specific domains, additional fine-tuning may be necessary

### Limitations

- The dataset represents a snapshot of Wikipedia at a specific point in time
- May not include the most recent information
- Content is limited to English language articles

## Additional Information

### Licensing Information

The dataset is released under the **CC-BY-SA 4.0** license, consistent with Wikipedia's content license.

### Citation Information

If you use this dataset, please cite:

```bibtex
@misc{wikipedia_cleaned_2026,
  author = {Aditya Sasidhar},
  title = {Wikipedia English Cleaned},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/adityasasidhar/Wikipedia_Cleaned}}
}
```

### Contributions

Dataset curated and uploaded by [Aditya Sasidhar](https://github.com/adityasasidhar).