jeremyrmanning committed · Commit 21758eb · verified · 1 Parent(s): af24716

Upload baum complete works corpus

Files changed (15):
  1. 22566.txt +0 -0
  2. 26624.txt +0 -0
  3. 30852.txt +0 -0
  4. 33361.txt +0 -0
  5. 39868.txt +0 -0
  6. 41667.txt +0 -0
  7. 43936.txt +0 -0
  8. 50194.txt +0 -0
  9. 52176.txt +0 -0
  10. 54.txt +0 -0
  11. 955.txt +0 -0
  12. 957.txt +0 -0
  13. 958.txt +0 -0
  14. 959.txt +0 -0
  15. README.md +266 -0
README.md ADDED
---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- baum
- classic-literature
- project-gutenberg
size_categories:
- 1K<n<10K
pretty_name: L. Frank Baum Complete Works
---

# L. Frank Baum Complete Works Corpus

<div style="text-align: center;">
<img src="https://raw.githubusercontent.com/ContextLab/llm-stylometry/main/assets/CDL_Avatar.png" alt="Context Lab" width="200"/>
</div>

## Dataset Description

This dataset contains the complete works of **L. Frank Baum** (1856-1919), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://github.com/ContextLab/llm-stylometry) (Stropkay et al., 2025).

The corpus includes **14 books** by L. Frank Baum, including The Wonderful Wizard of Oz series. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
29
+
30
+ ### Quick Stats
31
+
32
+ - **Books:** 14
33
+ - **Total characters:** 3,354,451
34
+ - **Total words:** 617,021 (approximate)
35
+ - **Average book length:** 239,603 characters
36
+ - **Format:** Plain text (.txt files)
37
+ - **Language:** English (lowercase)
38
+
39
+ ## Dataset Structure
40
+
41
+ ### Files
42
+
43
+ Each `.txt` file contains the complete text of one book, identified by its Project Gutenberg ID:
44
+
45
+ - `22566.txt` - Project Gutenberg book
46
+ - `26624.txt` - Project Gutenberg book
47
+ - `30852.txt` - Project Gutenberg book
48
+ - `33361.txt` - Project Gutenberg book
49
+ - `39868.txt` - Project Gutenberg book
50
+ - ... and 9 more books
51
+
52
+ ### Data Fields
53
+
54
+ - **text:** Complete book text (lowercase, cleaned)
55
+ - **filename:** Project Gutenberg ID
56
+
57
+ ### Data Format
58
+
59
+ All files are plain UTF-8 text:
60
+ - Lowercase characters only
61
+ - Punctuation and structure preserved
62
+ - Paragraph breaks maintained
63
+ - No chapter headings or non-narrative text
64
+
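These properties can be sanity-checked programmatically. A minimal sketch, using an inline sample rather than an actual corpus file, and a hypothetical `check_format` helper that is not part of the dataset:

```python
def check_format(text: str) -> bool:
    # Corpus files should contain no uppercase characters;
    # punctuation and paragraph breaks are allowed.
    return text == text.lower()

sample = "dorothy lived in the midst of the great kansas prairies.\n\ntoto made her laugh."
print(check_format(sample))  # True
```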

## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/baum-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```

### Load specific file

```python
# Load single book by filename
dataset = load_dataset(
    "contextlab/baum-corpus",
    data_files="54.txt",  # Specific Gutenberg ID
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```

### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/baum-corpus",
    filename="54.txt",
    repo_type="dataset",
)

with open(file_path, 'r') as f:
    text = f.read()
```

### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/baum-corpus")

# Tokenize (GPT-2 has no pad token, so reuse the EOS token for padding)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Data collator builds the language-modeling labels from the input IDs
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
```

### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/baum-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```

## Dataset Creation

### Source Data

All texts sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**
- Books identified by Gutenberg ID numbers (filenames)
- Example: `54.txt` corresponds to https://www.gutenberg.org/ebooks/54
- All works are in the public domain

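Given the filename convention above, the Gutenberg page for any book can be reconstructed directly. A small illustrative helper (not part of the dataset itself):

```python
def gutenberg_url(filename: str) -> str:
    # Strip the .txt extension to recover the Gutenberg book ID
    book_id = filename.removesuffix(".txt")
    return f"https://www.gutenberg.org/ebooks/{book_id}"

print(gutenberg_url("54.txt"))  # https://www.gutenberg.org/ebooks/54
```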
### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

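The actual pipeline lives in the repository linked below; a minimal sketch of steps 1 and 2 alone, assuming the standard `*** START/END OF ... PROJECT GUTENBERG EBOOK ***` markers (whose exact wording varies slightly between books), might look like:

```python
import re

# Standard Project Gutenberg start/end markers (wording varies by book)
START = re.compile(r"\*\*\* START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK[^*]*\*\*\*")
END = re.compile(r"\*\*\* END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK[^*]*\*\*\*")

def clean_gutenberg(raw: str) -> str:
    # Step 1: keep only the text between the start/end markers,
    # dropping the license text and metadata outside them
    start, end = START.search(raw), END.search(raw)
    body = raw[start.end() if start else 0 : end.start() if end else len(raw)]
    # Step 2: lowercase for stylometric analysis
    return body.strip().lower()
```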
**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry

## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects late 19th- to early 20th-century American vocabulary, grammar, and cultural context
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of L. Frank Baum's writings (only public domain works on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works published before copyright restrictions

### Intended Use Cases

- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of L. Frank Baum's writing
- **Historical NLP:** Late 19th- to early 20th-century American language patterns
- **Educational:** Teaching computational text analysis

### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Additional Information

### Dataset Curator

[Context Lab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License - Free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu)

### Related Resources

Explore datasets for all 8 authors in the study:

- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)

### Trained Models

Author-specific GPT-2 models trained on these corpora will be available after training completes:

- https://huggingface.co/contextlab (browse all models)