jeremyrmanning committed on
Commit 167a42c · verified · 1 Parent(s): 2ebf2a8

Upload austen complete works corpus

Files changed (8)
  1. 105.txt
  2. 121.txt
  3. 1342.txt
  4. 141.txt
  5. 158.txt
  6. 161.txt
  7. 946.txt
  8. README.md (+270 -0)
105.txt ADDED
121.txt ADDED
1342.txt ADDED
141.txt ADDED
158.txt ADDED
161.txt ADDED
946.txt ADDED

(The diffs for the text files are too large to render; see the raw diffs.)
README.md ADDED
@@ -0,0 +1,270 @@
---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- austen
- classic-literature
- project-gutenberg
size_categories:
- n<1K
pretty_name: Jane Austen Complete Works
---

# Jane Austen Complete Works Corpus

<div style="text-align: center;">
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/1654865912089-62a33fd71424f432574c348b.png" alt="ContextLab" width="100"/>
</div>

## Dataset Description

This dataset contains the complete works of **Jane Austen** (1775-1817), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://github.com/ContextLab/llm-stylometry) (Stropkay et al., 2025).

The corpus comprises **7 books** by Jane Austen, including *Pride and Prejudice*, *Sense and Sensibility*, and *Emma*. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.

### Quick Stats

- **Books:** 7
- **Total characters:** 4,127,071
- **Total words:** 740,058 (approximate)
- **Average book length:** 589,581 characters
- **Format:** Plain text (`.txt` files)
- **Language:** English (lowercase)

## Dataset Structure

### Books Included

Each `.txt` file contains the complete text of one book:

| File | Title |
|------|-------|
| `105.txt` | Persuasion |
| `121.txt` | Northanger Abbey |
| `1342.txt` | Pride and Prejudice |
| `141.txt` | Mansfield Park |
| `158.txt` | Emma |
| `161.txt` | Sense and Sensibility |
| `946.txt` | Lady Susan |

### Data Fields

- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID

### Data Format

All files are plain UTF-8 text:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text
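
As a quick check, the format guarantees listed above can be verified on any loaded book (a sketch; `sample` is a hypothetical stand-in for a book's `text` field):

```python
def matches_corpus_format(text: str) -> bool:
    # Lowercase only: no uppercase characters should remain
    if any(c.isupper() for c in text):
        return False
    # Paragraph breaks are preserved as blank lines
    return "\n\n" in text

# Hypothetical stand-in for a loaded book text
sample = "it is a truth universally acknowledged, that a single man\n\nin possession of a good fortune, must be in want of a wife."
print(matches_corpus_format(sample))  # True
```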

## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/austen-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```

### Load specific file

```python
from datasets import load_dataset

# Load a single book by filename
dataset = load_dataset(
    "contextlab/austen-corpus",
    data_files="1342.txt"  # Specific Gutenberg ID (Pride and Prejudice)
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```

### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/austen-corpus",
    filename="1342.txt",  # Pride and Prejudice
    repo_type="dataset"
)

with open(file_path, 'r') as f:
    text = f.read()
```

### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/austen-corpus")

# Tokenize
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Causal-LM collator pads batches and supplies the labels the Trainer needs
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
```

### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/austen-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```

## Dataset Creation

### Source Data

All texts were sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**
- Books are identified by their Gutenberg ID numbers (filenames)
- Example: `1342.txt` corresponds to https://www.gutenberg.org/ebooks/1342 (Pride and Prejudice)
- All works are in the public domain
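
The filename-to-URL convention above can be captured in a few lines (a sketch; the title mapping is taken from the table in this README):

```python
# Map each corpus filename to its title, per the table in this README
BOOKS = {
    "105.txt": "Persuasion",
    "121.txt": "Northanger Abbey",
    "1342.txt": "Pride and Prejudice",
    "141.txt": "Mansfield Park",
    "158.txt": "Emma",
    "161.txt": "Sense and Sensibility",
    "946.txt": "Lady Susan",
}

def gutenberg_url(filename: str) -> str:
    # Strip the .txt extension to recover the numeric Gutenberg ID
    book_id = filename.removesuffix(".txt")
    return f"https://www.gutenberg.org/ebooks/{book_id}"

for filename, title in BOOKS.items():
    print(f"{title}: {gutenberg_url(filename)}")
```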

### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry
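
The steps above can be sketched as follows (an illustrative approximation, not the repository's actual code; the `*** START/END ***` markers are the standard Gutenberg delimiters):

```python
import re

def preprocess_gutenberg(raw: str) -> str:
    # 1. Drop the Gutenberg header/footer (text outside the *** markers)
    start = re.search(r"\*\*\* START OF.*?\*\*\*", raw)
    end = re.search(r"\*\*\* END OF.*?\*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw
    # 2. Remove chapter headings such as "Chapter 1" or "CHAPTER IV"
    body = re.sub(r"^\s*chapter\s+[ivxlc\d]+\.?\s*$", "", body,
                  flags=re.IGNORECASE | re.MULTILINE)
    # 3. Lowercase everything; punctuation and paragraph breaks survive
    return body.lower().strip()

raw = ("*** START OF THE PROJECT GUTENBERG EBOOK ***\n"
       "CHAPTER I\nIt is a truth universally acknowledged.\n\n"
       "*** END OF THE PROJECT GUTENBERG EBOOK ***")
print(preprocess_gutenberg(raw))
```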

## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects 19th-century English vocabulary, grammar, and cultural context
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of Jane Austen's writings (only public-domain works available on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works published before copyright restrictions

### Intended Use Cases

- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of Jane Austen's writing
- **Historical NLP:** 19th-century English language patterns
- **Educational:** Teaching computational text analysis

### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources instead)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Additional Information

### Dataset Curator

[ContextLab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License - free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu)

### Related Resources

Explore datasets for all 8 authors in the study:
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)

### Trained Models

Author-specific GPT-2 models trained on these corpora will be available after training completes:
- https://huggingface.co/contextlab (browse all models)