jeremyrmanning committed · verified
Commit aeb45d5 · 1 Parent(s): 5968f17

Upload thompson complete works corpus

Files changed (14)
  1. 53765.txt +0 -0
  2. 55806.txt +0 -0
  3. 55851.txt +0 -0
  4. 56073.txt +0 -0
  5. 56079.txt +0 -0
  6. 56085.txt +0 -0
  7. 58765.txt +0 -0
  8. 61681.txt +0 -0
  9. 65849.txt +0 -0
  10. 70152.txt +0 -0
  11. 71273.txt +0 -0
  12. 73170.txt +0 -0
  13. 75720.txt +0 -0
  14. README.md +276 -0
53765.txt, 55806.txt, 55851.txt, 56073.txt, 56079.txt, 56085.txt, 58765.txt, 61681.txt, 65849.txt, 70152.txt, 71273.txt, 73170.txt, 75720.txt ADDED
The diffs for these 13 text files are too large to render. See the raw files.

README.md ADDED
@@ -0,0 +1,276 @@
---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- thompson
- classic-literature
- project-gutenberg
size_categories:
- n<1K
pretty_name: Ruth Plumly Thompson Complete Works
---

# Ruth Plumly Thompson Complete Works Corpus

<div style="text-align: center;">
  <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/1654865912089-62a33fd71424f432574c348b.png" alt="ContextLab" width="100"/>
</div>

## Dataset Description

This dataset contains the complete works of **Ruth Plumly Thompson** (1891-1976), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://github.com/ContextLab/llm-stylometry) (Stropkay et al., 2025).

The corpus includes **13 books** by Ruth Plumly Thompson, all drawn from the Oz series, which she continued after L. Frank Baum (writing books 15-33 of the series). All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.

### Quick Stats

- **Books:** 13
- **Total characters:** 2,932,685
- **Total words:** 520,058 (approximate)
- **Average book length:** 225,591 characters
- **Format:** Plain text (.txt files)
- **Language:** English (lowercase)

## Dataset Structure

### Books Included

Each `.txt` file contains the complete text of one book:

| File | Title |
|------|-------|
| `53765.txt` | Kabumpo in Oz |
| `55806.txt` | Ozoplaning with the Wizard of Oz |
| `55851.txt` | The Wishing Horse of Oz |
| `56073.txt` | Captain Salt in Oz |
| `56079.txt` | Handy Mandy in Oz |
| `56085.txt` | The Silver Princess in Oz |
| `58765.txt` | The Cowardly Lion of Oz |
| `61681.txt` | Grampa in Oz |
| `65849.txt` | The Lost King of Oz |
| `70152.txt` | The Hungry Tiger of Oz |
| `71273.txt` | The Gnome King of Oz |
| `73170.txt` | The Giant Horse of Oz |
| `75720.txt` | Jack Pumpkinhead of Oz |

### Data Fields

- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID

### Data Format

All files are plain UTF-8 text; the snippet below spot-checks these properties:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text

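A minimal check of the format guarantees, assuming each record's `text` field follows the conventions listed above:

```python
from datasets import load_dataset

# Spot-check the format guarantees on the loaded corpus
corpus = load_dataset("contextlab/thompson-corpus")

for book in corpus['train']:
    text = book['text']
    # Lowercase only: lowercasing again should be a no-op
    assert text == text.lower(), "found non-lowercase text"

print(f"Checked {len(corpus['train'])} records: all lowercase")
```
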
## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/thompson-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```

### Load specific file

```python
# Load a single book by filename (Gutenberg ID)
dataset = load_dataset(
    "contextlab/thompson-corpus",
    data_files="53765.txt"  # Kabumpo in Oz
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```

### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/thompson-corpus",
    filename="53765.txt",  # Kabumpo in Oz
    repo_type="dataset"
)

with open(file_path, 'r', encoding='utf-8') as f:
    text = f.read()
```

### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/thompson-corpus")

# Tokenize (truncation keeps only the first 1024 tokens of each record;
# chunk longer texts if you need full coverage)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Collator pads batches and supplies labels for causal LM training
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
```

### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/thompson-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```

## Dataset Creation

### Source Data

All texts were sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**
- Books are identified by their Gutenberg ID numbers (filenames); the snippet below builds the corresponding URLs
- Example: `53765.txt` corresponds to https://www.gutenberg.org/ebooks/53765
- All works are in the public domain

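Because each filename is a Gutenberg ID, the source page for any book can be reconstructed directly. A small helper, with the file list copied from the table above:

```python
# Map corpus filenames to their Project Gutenberg pages
filenames = [
    "53765.txt", "55806.txt", "55851.txt", "56073.txt", "56079.txt",
    "56085.txt", "58765.txt", "61681.txt", "65849.txt", "70152.txt",
    "71273.txt", "73170.txt", "75720.txt",
]

for name in filenames:
    gutenberg_id = name.removesuffix(".txt")  # Python 3.9+
    print(f"{name} -> https://www.gutenberg.org/ebooks/{gutenberg_id}")
```
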
### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing (a minimal sketch of these steps appears below):

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry

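The exact pipeline lives in the repository linked above. As a rough illustration only, a minimal sketch of the header/footer stripping, chapter-heading removal, and lowercasing steps, assuming the standard Project Gutenberg `*** START OF ... ***` and `*** END OF ... ***` markers, might look like this:

```python
import re

def clean_gutenberg_text(raw: str) -> str:
    """Minimal sketch: strip Gutenberg boilerplate and lowercase the body."""
    # Keep only the text between the standard START/END markers
    start = re.search(r"\*\*\* START OF.*?\*\*\*", raw)
    end = re.search(r"\*\*\* END OF.*?\*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw
    # Drop chapter headings like "CHAPTER 1" or "CHAPTER ONE" on their own line
    # (illustrative pattern; real headings vary by book)
    body = re.sub(r"^\s*chapter\s+\S+\s*$", "", body,
                  flags=re.IGNORECASE | re.MULTILINE)
    # Lowercase; punctuation and paragraph breaks are left intact
    return body.lower().strip()
```
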
## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects early-to-mid 20th-century American vocabulary, grammar, and cultural context
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of Ruth Plumly Thompson's writings (only public-domain works on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works that have entered the public domain

### Intended Use Cases

- **Stylometry research:** Authorship attribution and style analysis (see the sketch after this list)
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of Ruth Plumly Thompson's writing
- **Historical NLP:** Early-to-mid 20th-century American language patterns
- **Educational:** Teaching computational text analysis

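As a starting point for stylometry work, here is a minimal, illustrative sketch (not the method from the paper) that computes relative frequencies of a few function words, a classic authorship-attribution feature, for each record in the corpus:

```python
from collections import Counter
from datasets import load_dataset

# A few common English function words (an illustrative, not definitive, set)
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "was", "she"]

corpus = load_dataset("contextlab/thompson-corpus")

for i, book in enumerate(corpus['train']):
    words = book['text'].split()
    counts = Counter(words)
    total = len(words)
    # Relative frequency of each function word in this record
    freqs = {w: counts[w] / total for w in FUNCTION_WORDS}
    print(f"Record {i}: " + ", ".join(f"{w}={f:.4f}" for w, f in freqs.items()))
```

Feature vectors like these can then be compared across authors (e.g., by correlation or cosine similarity) to quantify stylistic distance.
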
### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Additional Information

### Dataset Curator

[ContextLab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License; free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu)

### Related Resources

Explore datasets for all 8 authors in the study:
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)

### Trained Models

Author-specific GPT-2 models trained on these corpora will be available after training completes:
- https://huggingface.co/contextlab (browse all models)