jeremyrmanning committed
Commit f2aeab9 · verified · 1 Parent(s): 32d0ee9

Upload dickens complete works corpus

Files changed (15)
  1. 1023.txt +0 -0
  2. 1400.txt +0 -0
  3. 24022.txt +0 -0
  4. 580.txt +0 -0
  5. 675.txt +0 -0
  6. 700.txt +0 -0
  7. 730.txt +0 -0
  8. 766.txt +0 -0
  9. 786.txt +0 -0
  10. 821.txt +0 -0
  11. 963.txt +0 -0
  12. 967.txt +0 -0
  13. 968.txt +0 -0
  14. 98.txt +0 -0
  15. README.md +277 -0
1023.txt ADDED
The diff for this file is too large to render. See raw diff
 
1400.txt ADDED
The diff for this file is too large to render. See raw diff
 
24022.txt ADDED
The diff for this file is too large to render. See raw diff
 
580.txt ADDED
The diff for this file is too large to render. See raw diff
 
675.txt ADDED
The diff for this file is too large to render. See raw diff
 
700.txt ADDED
The diff for this file is too large to render. See raw diff
 
730.txt ADDED
The diff for this file is too large to render. See raw diff
 
766.txt ADDED
The diff for this file is too large to render. See raw diff
 
786.txt ADDED
The diff for this file is too large to render. See raw diff
 
821.txt ADDED
The diff for this file is too large to render. See raw diff
 
963.txt ADDED
The diff for this file is too large to render. See raw diff
 
967.txt ADDED
The diff for this file is too large to render. See raw diff
 
968.txt ADDED
The diff for this file is too large to render. See raw diff
 
98.txt ADDED
The diff for this file is too large to render. See raw diff
 
README.md ADDED
@@ -0,0 +1,277 @@
---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- dickens
- classic-literature
- project-gutenberg
size_categories:
- 1K<n<10K
pretty_name: Charles Dickens Complete Works
---

# Charles Dickens Complete Works Corpus

<div style="text-align: center;">
  <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/1654865912089-62a33fd71424f432574c348b.png" alt="ContextLab" width="100"/>
</div>

## Dataset Description

This dataset contains the complete works of **Charles Dickens** (1812-1870), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://github.com/ContextLab/llm-stylometry) (Stropkay et al., 2025).

The corpus comprises **14 books** by Charles Dickens, including A Tale of Two Cities, Great Expectations, Oliver Twist, and David Copperfield. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.

### Quick Stats

- **Books:** 14
- **Total characters:** 18,205,497
- **Total words:** 3,270,073 (approximate)
- **Average book length:** 1,300,392 characters
- **Format:** Plain text (.txt files)
- **Language:** English (lowercase)

## Dataset Structure

### Books Included

Each `.txt` file contains the complete text of one book:

| File | Title |
|------|-------|
| `1023.txt` | Bleak House |
| `1400.txt` | Great Expectations |
| `24022.txt` | A Christmas Carol |
| `580.txt` | The Pickwick Papers |
| `675.txt` | American Notes |
| `700.txt` | The Old Curiosity Shop |
| `730.txt` | Oliver Twist |
| `766.txt` | David Copperfield |
| `786.txt` | Hard Times |
| `821.txt` | Dombey and Son |
| `963.txt` | Little Dorrit |
| `967.txt` | Nicholas Nickleby |
| `968.txt` | Martin Chuzzlewit |
| `98.txt` | A Tale of Two Cities |

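Since the file names are Gutenberg IDs rather than titles, it can help to keep the table above as a Python mapping. This is a convenience dict transcribed from the table; it is not shipped with the dataset itself:

```python
# Gutenberg filename -> title, transcribed from the table above
DICKENS_TITLES = {
    "1023.txt": "Bleak House",
    "1400.txt": "Great Expectations",
    "24022.txt": "A Christmas Carol",
    "580.txt": "The Pickwick Papers",
    "675.txt": "American Notes",
    "700.txt": "The Old Curiosity Shop",
    "730.txt": "Oliver Twist",
    "766.txt": "David Copperfield",
    "786.txt": "Hard Times",
    "821.txt": "Dombey and Son",
    "963.txt": "Little Dorrit",
    "967.txt": "Nicholas Nickleby",
    "968.txt": "Martin Chuzzlewit",
    "98.txt": "A Tale of Two Cities",
}
```
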
### Data Fields

- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID

### Data Format

All files are plain UTF-8 text:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text

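These guarantees are easy to spot-check on a single file; a minimal sketch, using the `huggingface_hub` download helper shown under Usage below and `98.txt` (A Tale of Two Cities) as the example:

```python
from huggingface_hub import hf_hub_download

# Spot-check the formatting of one book (98.txt = A Tale of Two Cities)
path = hf_hub_download(repo_id="contextlab/dickens-corpus",
                       filename="98.txt", repo_type="dataset")
with open(path, encoding="utf-8") as f:
    text = f.read()

assert text == text.lower(), "expected fully lowercased text"
print(text[:300])  # punctuation and paragraph breaks should be intact
```
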
## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/dickens-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```

### Load specific file

```python
# Load single book by filename
dataset = load_dataset(
    "contextlab/dickens-corpus",
    data_files="98.txt"  # Specific Gutenberg ID (A Tale of Two Cities)
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```

### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/dickens-corpus",
    filename="98.txt",  # A Tale of Two Cities
    repo_type="dataset"
)

with open(file_path, 'r') as f:
    text = f.read()
```

### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/dickens-corpus")

# Tokenize
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Pad batches and build language-modeling labels
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
```
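After training completes, a quick sanity check is to sample a short continuation from the fine-tuned model. The prompt and decoding settings below are illustrative only, and the snippet reuses `model` and `tokenizer` from the block above:

```python
# Generate a short Dickens-style continuation with the fine-tuned model
prompt = "it was the best of times"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
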
### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/dickens-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```

## Dataset Creation

### Source Data

All texts were sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**
- Books are identified by their Gutenberg ID numbers (filenames)
- Example: `98.txt` corresponds to https://www.gutenberg.org/ebooks/98
- All works are in the public domain

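Because the filenames are Gutenberg IDs, the source page for any file in the corpus can be reconstructed mechanically (a simple illustration):

```python
# Build the Project Gutenberg page URL for a corpus file
filename = "98.txt"  # A Tale of Two Cities
gutenberg_id = filename.rsplit(".", 1)[0]
print(f"https://www.gutenberg.org/ebooks/{gutenberg_id}")
```
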
### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry

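For orientation only, here is a minimal sketch of what steps 1-3 might look like, assuming standard Project Gutenberg start/end markers; it is not the repository's actual pipeline:

```python
import re

def clean_gutenberg_text(raw: str) -> str:
    """Illustrative cleanup only; see the llm-stylometry repository for the real pipeline."""
    # Step 1: drop the Project Gutenberg header/footer using the standard markers
    start = re.search(r"\*\*\* START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", raw)
    end = re.search(r"\*\*\* END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw

    # Step 3: strip simple chapter headings such as "CHAPTER I" or "Chapter 12"
    body = re.sub(r"^\s*chapter\s+[ivxlcdm\d]+.*$", "", body,
                  flags=re.IGNORECASE | re.MULTILINE)

    # Step 2: lowercase everything for stylometric analysis
    return body.lower().strip()
```
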
## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects Victorian England vocabulary, grammar, and cultural context
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of Charles Dickens's writings (only public domain works on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works published before copyright restrictions

### Intended Use Cases

- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of Charles Dickens's writing
- **Historical NLP:** Victorian England language patterns
- **Educational:** Teaching computational text analysis

### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Additional Information

### Dataset Curator

[ContextLab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License - free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu)

### Related Resources

Explore datasets for all 8 authors in the study:
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)

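To work with several authors at once, the sibling corpora can be loaded in a loop. The repository names are taken from the links above, and this assumes all eight datasets are publicly available:

```python
from datasets import load_dataset

# Load every author corpus from the llm-stylometry study
authors = ["austen", "baum", "dickens", "fitzgerald",
           "melville", "thompson", "twain", "wells"]
corpora = {author: load_dataset(f"contextlab/{author}-corpus") for author in authors}

for author, corpus in corpora.items():
    print(f"{author}: {len(corpus['train'])} records")
```
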
### Trained Models

Author-specific GPT-2 models trained on these corpora will be available after training completes:
- https://huggingface.co/contextlab (browse all models)