---
license: mit
task_categories:
- text-classification
- question-answering
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---
# 📚 TinyWay-Gutenberg-Clean (Compressed Shards)
A large-scale, high-quality English text dataset derived from Project Gutenberg.
The corpus has been cleaned, normalized, deduplicated, segmented into fixed-length samples, and stored as compressed JSONL shards for efficient large-scale language model training.
This dataset is intended for pretraining small and medium language models such as **TinyWay**, for tokenizer training, and for large-scale NLP research and experimentation.
---
## 📦 Dataset Overview
* **Name:** TinyWay-Gutenberg-Clean
* **Current Release:** ~19 compressed shards (`.jsonl.gz`)
* **Estimated Samples:** Tens of millions of text segments
* **Language:** English
* **Format:** Gzip-compressed JSON Lines (`.jsonl.gz`)
* **Source:** Project Gutenberg (public domain books)
* **License:** Public Domain
* **Maintainer:** Shivam (NNEngine / ITM AIR Lab)
Each record contains a clean text segment between **30 and 60 words**.
Future releases will scale this dataset further (e.g., 100M+ samples).
---
## Data Format
Each line is a JSON object:
```json
{
  "id": "twg_000000012345",
  "text": "Cleaned natural English text segment between thirty and sixty words.",
  "word_count": 42,
  "source": "gutenberg"
}
```
### Fields
| Field | Description |
| ------------ | ------------------------------ |
| `id` | Unique sample identifier |
| `text` | Clean English text segment |
| `word_count` | Number of words in the segment |
| `source` | Data source identifier |
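A record can be checked against this schema with a few lines of standard-library Python. This is a minimal sketch, not part of the dataset's own tooling; the example line below simply mirrors the documented fields.

```python
import json

# One raw JSONL line, matching the documented schema.
line = (
    '{"id": "twg_000000012345", '
    '"text": "Cleaned natural English text segment between thirty and sixty words.", '
    '"word_count": 42, "source": "gutenberg"}'
)

record = json.loads(line)

# Verify that every documented field is present with the expected type.
required = {"id": str, "text": str, "word_count": int, "source": str}
for field, expected_type in required.items():
    assert field in record, f"missing field: {field}"
    assert isinstance(record[field], expected_type), f"bad type for {field}"

print(record["id"], record["word_count"])
```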
---
## Data Processing Pipeline
The dataset was generated using a fully streaming pipeline to ensure scalability and low memory usage.
### Processing Steps
1. **Streaming Input**
* Text streamed from a Project Gutenberg mirror on Hugging Face.
2. **Text Cleaning**
* Removed Gutenberg headers and footers.
* Removed chapter titles, page numbers, and boilerplate text.
* Normalized whitespace and line breaks.
* Removed non-ASCII and control characters.
* Filtered malformed or extremely short segments.
3. **Segmentation**
* Text segmented into chunks of **30–60 words**.
4. **Validation**
* Enforced word count limits.
* Filtered invalid or noisy segments.
5. **Deduplication**
* Exact hash-based deduplication applied during generation.
6. **Compression & Sharding**
* Data stored as `.jsonl.gz` shards for efficient disk usage and streaming.
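The segmentation, deduplication, and sharding steps above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline code: function names, the SHA-256 hash choice, and the shard filename are assumptions; only the 30–60 word window, exact hash-based dedup, and gzip JSONL output come from the description above.

```python
import gzip
import hashlib
import json

def segment(text, min_words=30, max_words=60):
    """Split cleaned text into chunks of 30-60 words; drop short remainders."""
    words = text.split()
    for start in range(0, len(words), max_words):
        chunk = words[start:start + max_words]
        if len(chunk) >= min_words:
            yield " ".join(chunk)

def dedup_and_shard(texts, shard_path):
    """Exact hash-based dedup, then write a gzip-compressed JSONL shard."""
    seen = set()
    kept = 0
    with gzip.open(shard_path, "wt", encoding="utf-8") as f:
        for text in texts:
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            if digest in seen:
                continue  # exact duplicate, skip
            seen.add(digest)
            record = {
                "id": f"twg_{kept:012d}",
                "text": text,
                "word_count": len(text.split()),
                "source": "gutenberg",
            }
            f.write(json.dumps(record) + "\n")
            kept += 1
    return kept
```

Because both functions consume iterables and write incrementally, the whole pass stays streaming, consistent with the low-memory design described above (though the dedup `seen` set itself grows with the number of unique segments).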
---
## How to Load the Dataset
### Using Hugging Face Datasets (Streaming)
```python
from datasets import load_dataset
dataset = load_dataset(
    "NNEngine/TinyWay-Gutenberg-Clean",
    split="train",
    streaming=True,
)

# Preview a few samples without downloading the full dataset.
for i, sample in enumerate(dataset):
    print(sample)
    if i == 3:
        break
```
---
### Reading a Shard Manually
```python
import gzip
import json
# Print the first three records from a single downloaded shard.
with gzip.open("train-00000.jsonl.gz", "rt", encoding="utf-8") as f:
    for _ in range(3):
        print(json.loads(next(f)))
```
---
## Dataset Characteristics (Approximate)
* **Average words per sample:** ~45
* **Style:** Literary and narrative English
* **Domain:** Fiction, non-fiction, historical texts
* **Vocabulary:** Large natural English vocabulary
* **Compression:** ~60–70% size reduction vs raw JSONL
Exact statistics may vary per shard and will be expanded in future releases.
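Statistics such as the average word count can be recomputed per shard in a single streaming pass. The sketch below operates on in-memory records for self-containment, but the same loop works unchanged over a `gzip.open` file handle; the `shard_stats` helper is illustrative, not part of the dataset tooling.

```python
def shard_stats(records):
    """Single-pass sample count and mean word count over an iterable of records."""
    total_words = 0
    n = 0
    for record in records:
        total_words += record["word_count"]
        n += 1
    return {"samples": n, "avg_words": total_words / n if n else 0.0}

# Illustrative records following the documented schema.
records = [
    {"id": "twg_000000000000", "text": "...", "word_count": 40, "source": "gutenberg"},
    {"id": "twg_000000000001", "text": "...", "word_count": 50, "source": "gutenberg"},
]
print(shard_stats(records))  # avg_words should land in the 30-60 range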
---
## Limitations
* Primarily literary and historical language.
* No conversational chat data.
* No code or structured technical documentation.
* Some archaic vocabulary and sentence structures may appear.
* Deduplication is hash-based (near-duplicates may remain).
For conversational or web-style language modeling, this dataset should be mixed with complementary corpora.
---
## License
All source texts originate from Project Gutenberg and are in the **public domain**.
This processed dataset is released for unrestricted research and commercial use.
---
## Versioning & Roadmap
Planned future updates:
- Larger releases (target: 100M+ samples)
- Improved deduplication (near-duplicate filtering)
- Dataset statistics and analytics
- Additional language normalization
Each major release will be versioned clearly.
---
## Citation
If you use this dataset in research or publications, please cite:
```
TinyWay-Gutenberg-Clean
Shivam (NNEngine), 2026
```