---
license: mit
task_categories:
- text-classification
- question-answering
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---

# 📚 TinyWay-Gutenberg-Clean (Compressed Shards)

A large-scale, high-quality English text dataset derived from Project Gutenberg. The corpus has been cleaned, normalized, deduplicated, segmented into fixed-length samples, and stored as compressed JSONL shards for efficient large-scale language model training.

This dataset is intended for pretraining and experimenting with small and medium language models such as **TinyWay**, for tokenizer training, and for large-scale NLP research.

---

## 📦 Dataset Overview

* **Name:** TinyWay-Gutenberg-Clean
* **Current Release:** ~19 compressed shards (`.jsonl.gz`)
* **Estimated Samples:** Tens of millions of text segments
* **Language:** English
* **Format:** Gzip-compressed JSON Lines (`.jsonl.gz`)
* **Source:** Project Gutenberg (public-domain books)
* **License:** Public Domain
* **Maintainer:** Shivam (NNEngine / ITM AIR Lab)

Each record contains a clean text segment of **30–60 words**.

Future releases will scale this dataset further (e.g., to 100M+ samples).

---

## Data Format

Each line is a JSON object:

```json
{
  "id": "twg_000000012345",
  "text": "Cleaned natural English text segment between thirty and sixty words.",
  "word_count": 42,
  "source": "gutenberg"
}
```

### Fields

| Field        | Description                    |
| ------------ | ------------------------------ |
| `id`         | Unique sample identifier       |
| `text`       | Clean English text segment     |
| `word_count` | Number of words in the segment |
| `source`     | Data source identifier         |

---
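For downstream pipelines, a record can be checked against this schema in a few lines (a minimal sketch; `validate_record` and `SCHEMA` are names introduced here for illustration, not part of the dataset tooling):

```python
import json

# Required fields and their expected types, per the table above.
SCHEMA = {"id": str, "text": str, "word_count": int, "source": str}

def validate_record(line: str) -> dict:
    """Parse one JSONL line and verify it matches the documented schema."""
    record = json.loads(line)
    for field, expected_type in SCHEMA.items():
        if not isinstance(record.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    # word_count should agree with a simple whitespace split of the text.
    if record["word_count"] != len(record["text"].split()):
        raise ValueError("word_count does not match text")
    return record

sample = '{"id": "twg_000000012345", "text": "one two three", "word_count": 3, "source": "gutenberg"}'
print(validate_record(sample)["id"])  # → twg_000000012345
```

---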

## Data Processing Pipeline

The dataset was generated using a fully streaming pipeline to ensure scalability and low memory usage.

### Processing Steps

1. **Streaming Input**

   * Text streamed from a Project Gutenberg mirror on Hugging Face.

2. **Text Cleaning**

   * Removed Gutenberg headers and footers.
   * Removed chapter titles, page numbers, and boilerplate text.
   * Normalized whitespace and line breaks.
   * Removed non-ASCII and control characters.
   * Filtered malformed or extremely short segments.

3. **Segmentation**

   * Text segmented into chunks of **30–60 words**.

4. **Validation**

   * Enforced word-count limits.
   * Filtered invalid or noisy segments.

5. **Deduplication**

   * Exact hash-based deduplication applied during generation.

6. **Compression & Sharding**

   * Data stored as `.jsonl.gz` shards for efficient disk usage and streaming.

---
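The segmentation and deduplication steps can be sketched as follows (an illustrative sketch only, not the production pipeline; `segment_text`, `dedup`, and the greedy chunking strategy are assumptions made here):

```python
import hashlib

MIN_WORDS, MAX_WORDS = 30, 60  # segment length limits from the pipeline above

def segment_text(text: str):
    """Greedily cut a cleaned document into chunks of 30-60 words."""
    words = text.split()
    for start in range(0, len(words), MAX_WORDS):
        chunk = words[start:start + MAX_WORDS]
        if len(chunk) >= MIN_WORDS:  # drop a too-short trailing chunk
            yield " ".join(chunk)

def dedup(segments):
    """Exact hash-based deduplication: keep only the first copy of a segment."""
    seen = set()
    for seg in segments:
        digest = hashlib.sha256(seg.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield seg

doc = " ".join(f"word{i}" for i in range(150))  # a 150-word toy document
segments = list(dedup(segment_text(doc)))
print(len(segments))  # → 3 (two 60-word chunks and one 30-word chunk)
```

---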

## How to Load the Dataset

### Using Hugging Face Datasets (Streaming)

```python
from datasets import load_dataset

# Stream the dataset without downloading all shards up front.
dataset = load_dataset(
    "NNEngine/TinyWay-Gutenberg-Clean",
    split="train",
    streaming=True,
)

# Preview the first few samples.
for i, sample in enumerate(dataset):
    print(sample)
    if i == 3:
        break
```

---

### Reading a Shard Manually

```python
import gzip
import json

# Each shard is gzip-compressed JSON Lines; print the first three records.
with gzip.open("train-00000.jsonl.gz", "rt", encoding="utf-8") as f:
    for _ in range(3):
        print(json.loads(next(f)))
```

---
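For quick corpus statistics, the same reading pattern extends over every local shard (a minimal sketch; the `train-*.jsonl.gz` naming is assumed from the example above, and `shard_stats` is a helper invented here):

```python
import glob
import gzip
import json

def shard_stats(pattern: str) -> dict:
    """Average word_count across every shard matching a glob pattern."""
    total_words = total_samples = 0
    for path in sorted(glob.glob(pattern)):
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                total_words += json.loads(line)["word_count"]
                total_samples += 1
    return {"samples": total_samples,
            "avg_words": total_words / max(total_samples, 1)}

print(shard_stats("train-*.jsonl.gz"))
```

---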

## Dataset Characteristics (Approximate)

* **Average words per sample:** ~45
* **Style:** Literary and narrative English
* **Domain:** Fiction, non-fiction, historical texts
* **Vocabulary:** Large natural English vocabulary
* **Compression:** ~60–70% size reduction vs. raw JSONL

Exact statistics may vary per shard and will be expanded in future releases.

---
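The compression figure can be sanity-checked locally; the sketch below writes synthetic schema-shaped records and compares raw vs. gzip sizes (synthetic repetitive text compresses far better than real prose, so the printed reduction will not match the ~60–70% quoted above):

```python
import gzip
import json
import os
import tempfile

# Build a small batch of schema-shaped synthetic records.
records = [
    {"id": f"twg_{i:012d}", "text": "sample english text " * 10,
     "word_count": 30, "source": "gutenberg"}
    for i in range(1000)
]
raw = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# Write a gzip-compressed shard and compare sizes.
with tempfile.NamedTemporaryFile(suffix=".jsonl.gz", delete=False) as tmp:
    path = tmp.name
with gzip.open(path, "wb") as f:
    f.write(raw)

compressed = os.path.getsize(path)
print(f"raw={len(raw)}B compressed={compressed}B "
      f"reduction={1 - compressed / len(raw):.0%}")
os.remove(path)
```

---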

## Limitations

* Primarily literary and historical language.
* No conversational chat data.
* No code or structured technical documentation.
* Some archaic vocabulary and sentence structures may appear.
* Deduplication is exact hash-based, so near-duplicates may remain.

For conversational or web-style language modeling, mix this dataset with complementary corpora.

---

## License

All source texts originate from Project Gutenberg and are in the **public domain**. This processed dataset is released for unrestricted research and commercial use.

---

## Versioning & Roadmap

Planned future updates:

- Larger releases (target: 100M+ samples)
- Improved deduplication (near-duplicate filtering)
- Dataset statistics and analytics
- Additional language normalization

Each major release will be versioned clearly.

---

## Citation

If you use this dataset in research or publications, please cite:

```
TinyWay-Gutenberg-Clean
Shivam (NNEngine), 2026
```