---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- text-generation
- causal
- training
- transformers
- pytorch
- jsonl
- segmentation
- validation
size_categories:
- 10M<n<100M
---
# 📚 TinyWay-Gutenberg-Clean-40M
A large-scale, high-quality English text dataset derived from Project Gutenberg, cleaned, normalized, deduplicated, and segmented into short samples of 30–60 words for efficient language model pretraining.
This dataset is designed to support training small and medium language models such as **TinyWay**, tokenizer training, embedding models, and large-scale NLP experimentation.
---
## Dataset Overview
* **Name:** TinyWay-Gutenberg-Clean-40M
* **Samples:** ~40,000,000
* **Language:** English
* **Format:** JSONL (optionally gzip-compressed)
* **Source:** Project Gutenberg (public domain books)
* **License:** Public Domain
* **Intended Use:** Language model pretraining, tokenizer training, representation learning
Each line in the dataset contains a clean text segment between **30 and 60 words**.
---
## Data Format
Each record is stored as a JSON object:
```json
{
"id": "twg_000000000123",
"text": "Cleaned text segment of natural English language between thirty and sixty words.",
"word_count": 42,
"source": "gutenberg"
}
```
### Fields
| Field | Description |
| ------------ | ----------------------------- |
| `id` | Unique sample identifier |
| `text` | Clean English text segment |
| `word_count` | Number of words in the sample |
| `source` | Data source identifier |
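
A record can be checked against this schema with a few lines of Python. The `validate_record` helper below is hypothetical (it is not part of any published dataset tooling); it simply enforces the field types and the documented 30–60 word range:

```python
def validate_record(record: dict) -> bool:
    """Check a record against the documented schema (hypothetical helper)."""
    required = {"id": str, "text": str, "word_count": int, "source": str}
    for field, typ in required.items():
        if not isinstance(record.get(field), typ):
            return False
    # word_count must match the text and fall in the documented 30-60 range
    n = len(record["text"].split())
    return record["word_count"] == n and 30 <= n <= 60

sample = {
    "id": "twg_000000000123",
    "text": " ".join(["word"] * 42),
    "word_count": 42,
    "source": "gutenberg",
}
print(validate_record(sample))  # True
```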
---
## Data Processing Pipeline
The dataset was generated using a fully streaming pipeline to ensure scalability and low memory usage.
### Steps
1. **Streaming Input**
* Data loaded from a Project Gutenberg mirror using Hugging Face streaming APIs.
2. **Text Cleaning**
* Removed Gutenberg headers and footers
* Removed chapter titles and page numbers
* Normalized whitespace and line breaks
* Removed non-ASCII and control characters
* Removed URLs and artifacts
3. **Segmentation**
* Text split into consecutive segments of **30–60 words** each.
4. **Validation**
* Enforced word count constraints
* Filtered short or malformed segments
5. **Deduplication**
* Exact hash-based deduplication applied during generation.
6. **Output**
* Stored as JSONL files (optionally gzip-compressed).
* Sharded for easier distribution and loading.
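
The segmentation and deduplication steps above can be sketched roughly as follows. This is a simplified illustration, not the actual pipeline code (which is not published); the function names and the ASCII-only cleaning shortcut are assumptions:

```python
import hashlib
import re

def clean(text: str) -> str:
    # Drop non-ASCII characters and collapse whitespace (simplified cleaning)
    text = text.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"\s+", " ", text).strip()

def segment(text: str, lo: int = 30, hi: int = 60):
    # Yield consecutive chunks of lo..hi words; a tail shorter than lo is dropped
    words = text.split()
    i = 0
    while i + lo <= len(words):
        chunk = words[i:i + hi]
        yield " ".join(chunk)
        i += len(chunk)

seen = set()

def dedup(segments):
    # Exact hash-based deduplication: emit each unique segment once
    for seg in segments:
        h = hashlib.sha256(seg.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            yield seg
```

For a 100-word input this yields one 60-word segment and one 40-word segment, both inside the 30–60 word bounds; repeats are filtered by hash.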
---
## How to Load the Dataset
### Using Hugging Face Datasets
```python
from datasets import load_dataset
dataset = load_dataset(
    "NNEngine/TinyWay-Gutenberg-Clean-40M",
    split="train",
    streaming=True,
)

for sample in dataset.take(3):
    print(sample)
```
---
### Reading JSONL Manually
```python
import json
with open("data/train-00000.jsonl", "r", encoding="utf-8") as f:
    for _ in range(3):
        print(json.loads(next(f)))
```
If files are compressed:
```python
import gzip
import json
with gzip.open("train-00000.jsonl.gz", "rt", encoding="utf-8") as f:
    for _ in range(3):
        print(json.loads(next(f)))
```
---
## Dataset Characteristics
Approximate properties:
* **Average words per sample:** ~45
* **Vocabulary:** Large natural English vocabulary
* **Style:** Literary and narrative English
* **Domain:** Fiction, non-fiction, historical texts
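
These figures can be spot-checked against any shard by averaging the `word_count` field. The snippet below uses a few in-memory records as a stand-in for real JSONL lines:

```python
import json

# Hypothetical stand-in for lines read from a JSONL shard
lines = [
    json.dumps({"id": f"twg_{i:012d}", "text": " ".join(["word"] * n),
                "word_count": n, "source": "gutenberg"})
    for i, n in enumerate([30, 45, 60])
]

counts = [json.loads(line)["word_count"] for line in lines]
avg = sum(counts) / len(counts)
print(f"average words per sample: {avg:.1f}")  # average words per sample: 45.0
```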
---
## Limitations
* Content is primarily literary and historical in nature.
* No conversational, chat, or code data.
* Some archaic vocabulary and sentence structure may appear.
* Deduplication is hash-based (near-duplicates may remain).
For conversational or modern web text, additional datasets should be mixed.
---
## License
All source texts originate from Project Gutenberg and are in the **public domain**.
This processed dataset is released for unrestricted research and commercial use.
---
## Citation
If you use this dataset in research or publications, please cite:
```
TinyWay-Gutenberg-Clean-40M
NNEngine, 2026
```
---
## 🧠 Maintainer
Created and maintained by **Shivam Sharma**