jeremyrmanning committed
Commit 151ead9 · verified · 1 Parent(s): da7742a

Upload twain complete works corpus

Files changed (7)
  1. 1837.txt +0 -0
  2. 3176.txt +0 -0
  3. 3177.txt +0 -0
  4. 74.txt +0 -0
  5. 76.txt +0 -0
  6. 86.txt +0 -0
  7. README.md +269 -0
1837.txt ADDED
The diff for this file is too large to render. See raw diff

3176.txt ADDED
3177.txt ADDED
74.txt ADDED
76.txt ADDED
86.txt ADDED

README.md ADDED
---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- twain
- classic-literature
- project-gutenberg
size_categories:
- n<1K
pretty_name: Mark Twain Complete Works
---

# Mark Twain Complete Works Corpus

<div style="text-align: center;">
  <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/1654865912089-62a33fd71424f432574c348b.png" alt="ContextLab" width="100"/>
</div>

## Dataset Description

This dataset contains the complete works of **Mark Twain** (1835-1910), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://github.com/ContextLab/llm-stylometry) (Stropkay et al., 2025).

The corpus comprises **6 books** by Mark Twain, including *Adventures of Huckleberry Finn*, *The Adventures of Tom Sawyer*, and *A Connecticut Yankee in King Arthur's Court*. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.

### Quick Stats

- **Books:** 6
- **Total characters:** 3,918,002
- **Total words:** 715,977 (approximate)
- **Average book length:** 653,000 characters
- **Format:** Plain text (.txt files)
- **Language:** English (lowercase)

## Dataset Structure

### Books Included

Each `.txt` file contains the complete text of one book:

| File | Title |
|------|-------|
| `1837.txt` | The Prince and the Pauper |
| `3176.txt` | The Innocents Abroad |
| `3177.txt` | Roughing It |
| `74.txt` | The Adventures of Tom Sawyer, Complete |
| `76.txt` | Adventures of Huckleberry Finn |
| `86.txt` | A Connecticut Yankee in King Arthur's Court |

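When working with the raw files outside the `datasets` library, it can be handy to keep the table above as a small lookup table (titles copied verbatim from the table; the dict name is just illustrative):

```python
# Gutenberg-ID filename -> title, copied from the table above
TWAIN_FILES = {
    "1837.txt": "The Prince and the Pauper",
    "3176.txt": "The Innocents Abroad",
    "3177.txt": "Roughing It",
    "74.txt": "The Adventures of Tom Sawyer, Complete",
    "76.txt": "Adventures of Huckleberry Finn",
    "86.txt": "A Connecticut Yankee in King Arthur's Court",
}
```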
### Data Fields

- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID

### Data Format

All files are plain UTF-8 text:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text

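The format guarantees above are easy to spot-check programmatically; here is a minimal sketch (the helper name is ours, not part of the dataset):

```python
def check_format(text):
    """Spot-check the documented format properties of one book's text."""
    return {
        "lowercase": text == text.lower(),       # lowercase-only guarantee
        "has_paragraph_breaks": "\n\n" in text,  # paragraph structure preserved
        "chars": len(text),
    }
```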
## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/twain-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```

### Load specific file

```python
from datasets import load_dataset

# Load a single book by filename ("76.txt" is Adventures of Huckleberry Finn)
dataset = load_dataset(
    "contextlab/twain-corpus",
    data_files="76.txt"  # specific Gutenberg ID
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```

### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book ("76.txt" is Adventures of Huckleberry Finn)
file_path = hf_hub_download(
    repo_id="contextlab/twain-corpus",
    filename="76.txt",
    repo_type="dataset"
)

with open(file_path, 'r', encoding='utf-8') as f:
    text = f.read()
```

### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/twain-corpus")

# Tokenize (GPT-2 has no pad token, so reuse the end-of-text token)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Causal-LM collator: pads batches and supplies labels for next-token prediction
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
```

### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/twain-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```

## Dataset Creation

### Source Data

All texts were sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**
- Books are identified by Gutenberg ID numbers (used as filenames)
- Example: `76.txt` corresponds to https://www.gutenberg.org/ebooks/76
- All works are in the public domain

### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry

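As a rough illustration only (not the authors' actual pipeline; see the repository above for the real code), steps 1 and 2 might look like the sketch below. Chapter-heading and non-narrative removal are omitted, and real Gutenberg marker lines vary between files:

```python
import re

def clean_gutenberg_text(raw):
    """Sketch: strip Project Gutenberg header/footer, then lowercase."""
    # Marker wording varies between files ("THE" vs "THIS"), so match both
    start = re.search(r"\*\*\* ?START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK[^\n]*\*\*\*", raw)
    end = re.search(r"\*\*\* ?END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK[^\n]*\*\*\*", raw)
    # Keep only the body between the markers; fall back to the full text
    body = raw[start.end():end.start()] if (start and end) else raw
    return body.strip().lower()
```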
## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects 19th-century American vocabulary, grammar, and cultural context
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of Mark Twain's writings (only public-domain works available on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works published before copyright restrictions

### Intended Use Cases

- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of Mark Twain's writing
- **Historical NLP:** 19th-century American language patterns
- **Education:** Teaching computational text analysis

### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Additional Information

### Dataset Curator

[ContextLab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License - free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu)

### Related Resources

Explore datasets for all 8 authors in the study:
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)

### Trained Models

Author-specific GPT-2 models trained on these corpora will be available after training completes:
- https://huggingface.co/contextlab (browse all models)