|
|
--- |
|
|
language: en |
|
|
license: mit |
|
|
task_categories: |
|
|
- text-generation |
|
|
tags: |
|
|
- stylometry |
|
|
- authorship-attribution |
|
|
- literary-analysis |
|
|
- baum |
|
|
- classic-literature |
|
|
- project-gutenberg |
|
|
size_categories: |
|
|
- n<1K
|
|
pretty_name: L. Frank Baum Corpus |
|
|
--- |
|
|
|
|
|
# ContextLab L. Frank Baum Corpus |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
This dataset contains works of **L. Frank Baum** (1856-1919), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://arxiv.org/abs/2510.21958) (Stropkay et al., 2025). |
|
|
|
|
|
The corpus includes **14 books** by L. Frank Baum: the complete set of Oz novels he wrote, from *The Wonderful Wizard of Oz* through *Glinda of Oz*. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
|
|
|
|
|
### Quick Stats |
|
|
|
|
|
- **Books:** 14 |
|
|
- **Total characters:** 3,354,451 |
|
|
- **Total words:** 617,021 (approximate) |
|
|
- **Average book length:** 239,603 characters |
|
|
- **Format:** Plain text (.txt files) |
|
|
- **Language:** English (lowercase) |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
### Books Included |
|
|
|
|
|
Each `.txt` file contains the complete text of one book: |
|
|
|
|
|
| File | Title | |
|
|
|------|-------| |
|
|
| `22566.txt` | The Emerald City of Oz | |
|
|
| `26624.txt` | The Patchwork Girl of Oz | |
|
|
| `30852.txt` | Tik-Tok of Oz | |
|
|
| `33361.txt` | The Scarecrow of Oz | |
|
|
| `39868.txt` | Rinkitink in Oz | |
|
|
| `41667.txt` | The Lost Princess of Oz | |
|
|
| `43936.txt` | The Tin Woodman of Oz | |
|
|
| `50194.txt` | The Magic of Oz | |
|
|
| `52176.txt` | Glinda of Oz | |
|
|
| `54.txt` | The Wonderful Wizard of Oz | |
|
|
| `955.txt` | The Marvelous Land of Oz | |
|
|
| `957.txt` | Ozma of Oz | |
|
|
| `958.txt` | Dorothy and the Wizard in Oz | |
|
|
| `959.txt` | The Road to Oz | |
|
|
|
|
|
|
|
|
### Data Fields |
|
|
|
|
|
- **text:** Complete book text (lowercase, cleaned) |
|
|
- **filename:** Project Gutenberg ID |
|
|
|
|
|
### Data Format |
|
|
|
|
|
All files are plain UTF-8 text: |
|
|
- Lowercase characters only |
|
|
- Punctuation and structure preserved |
|
|
- Paragraph breaks maintained |
|
|
- No chapter headings or non-narrative text |
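
These properties can be spot-checked after loading. A minimal sketch: `sample_by="document"` keeps one record per file, and the blank-line paragraph convention is an assumption.

```python
from datasets import load_dataset

corpus = load_dataset("contextlab/baum-corpus", sample_by="document")

# Spot-check the format claims on the first book
text = corpus['train'][0]['text']
assert text == text.lower()  # lowercase only
assert "\n\n" in text        # paragraph breaks (assumed to be blank lines)
print(text[:200])            # punctuation and structure preserved
```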
|
|
|
|
|
## Usage |
|
|
|
|
|
### Load with `datasets` library |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load entire corpus; sample_by="document" keeps one record per book
# (the default text loader would otherwise emit one record per line)
corpus = load_dataset("contextlab/baum-corpus", sample_by="document")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
|
|
``` |
|
|
|
|
|
### Load specific file |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset

# Load a single book by filename
dataset = load_dataset(
    "contextlab/baum-corpus",
    data_files="54.txt",   # specific Gutenberg ID
    sample_by="document",  # one record for the whole file
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
|
|
``` |
|
|
|
|
|
### Download files directly |
|
|
|
|
|
```python |
|
|
from huggingface_hub import hf_hub_download |
|
|
|
|
|
# Download one book |
|
|
file_path = hf_hub_download( |
|
|
repo_id="contextlab/baum-corpus", |
|
|
filename="54.txt", |
|
|
repo_type="dataset" |
|
|
) |
|
|
|
|
|
with open(file_path, 'r', encoding='utf-8') as f:
|
|
text = f.read() |
|
|
``` |
|
|
|
|
|
### Use for training language models |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load corpus (one record per book)
corpus = load_dataset("contextlab/baum-corpus", sample_by="document")

# Tokenize; GPT-2 has no pad token, so reuse the end-of-text token
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    # Note: truncation keeps only the first 1024 tokens of each book;
    # chunk the texts instead if you want to train on the full corpus
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Pads batches and copies input_ids to labels for causal LM training
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()
|
|
``` |
|
|
|
|
|
### Analyze text statistics |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
import numpy as np |
|
|
|
|
|
corpus = load_dataset("contextlab/baum-corpus", sample_by="document")  # one record per book
|
|
|
|
|
# Calculate statistics |
|
|
lengths = [len(book['text']) for book in corpus['train']] |
|
|
|
|
|
print(f"Books: {len(lengths)}") |
|
|
print(f"Total characters: {sum(lengths):,}") |
|
|
print(f"Mean length: {np.mean(lengths):,.0f} characters") |
|
|
print(f"Std length: {np.std(lengths):,.0f} characters") |
|
|
print(f"Min length: {min(lengths):,} characters") |
|
|
print(f"Max length: {max(lengths):,} characters") |
|
|
``` |
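
The approximate word count from the Quick Stats can be reproduced by whitespace splitting; this is a sketch, and the exact tokenization behind the published figure is an assumption.

```python
from datasets import load_dataset

corpus = load_dataset("contextlab/baum-corpus", sample_by="document")

# Whitespace tokenization, assumed to match how the "Total words"
# figure in Quick Stats was computed
word_counts = [len(book['text'].split()) for book in corpus['train']]
print(f"Total words (approx.): {sum(word_counts):,}")
```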
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Source Data |
|
|
|
|
|
All texts sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain. |
|
|
|
|
|
**Project Gutenberg Links:** |
|
|
- Books identified by Gutenberg ID numbers (filenames) |
|
|
- Example: `54.txt` corresponds to https://www.gutenberg.org/ebooks/54 |
|
|
- All works are in the public domain |
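
The source page for any book can be derived from its filename; `gutenberg_url` below is a hypothetical helper shown only for illustration.

```python
from pathlib import Path

def gutenberg_url(filename: str) -> str:
    """Map a corpus filename such as '54.txt' to its Project Gutenberg page."""
    return f"https://www.gutenberg.org/ebooks/{Path(filename).stem}"

print(gutenberg_url("54.txt"))  # -> https://www.gutenberg.org/ebooks/54
```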
|
|
|
|
|
### Preprocessing Pipeline |
|
|
|
|
|
The raw Project Gutenberg texts underwent the following preprocessing: |
|
|
|
|
|
1. **Header/footer removal:** Project Gutenberg license text and metadata removed |
|
|
2. **Lowercase conversion:** All text converted to lowercase for stylometry |
|
|
3. **Chapter heading removal:** Chapter titles and numbering removed |
|
|
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed |
|
|
5. **Encoding normalization:** Converted to UTF-8 |
|
|
6. **Structure preservation:** Paragraph breaks and punctuation maintained |
|
|
|
|
|
**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable. |
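
The exact pipeline is in the repository linked below; the following is only a minimal sketch of steps 1-3, assuming the standard `*** START OF ... ***` / `*** END OF ... ***` Gutenberg markers and a simple chapter-heading pattern.

```python
import re

def clean_gutenberg(raw: str) -> str:
    """Minimal sketch of the cleaning steps described above; the actual
    pipeline is in the ContextLab/llm-stylometry repository."""
    # 1. Strip the Gutenberg header and footer (marker lines vary slightly
    #    across books, so real code needs more robust matching)
    start = re.search(r"\*\*\* START OF .*? \*\*\*", raw)
    end = re.search(r"\*\*\* END OF .*? \*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw

    # 2. Lowercase for stylometry
    body = body.lower()

    # 3. Drop chapter headings (assumed pattern; real books vary)
    body = re.sub(r"^\s*chapter\s+\w+.*$", "", body, flags=re.MULTILINE)

    return body.strip()
```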
|
|
|
|
|
**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry |
|
|
|
|
|
## Considerations for Using This Dataset |
|
|
|
|
|
### Known Limitations |
|
|
|
|
|
- **Historical language:** Reflects late 19th- to early 20th-century American vocabulary, grammar, and cultural context
|
|
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis) |
|
|
- **Incomplete corpus:** May not include all of L. Frank Baum's writings (only public domain works on Gutenberg) |
|
|
- **Cleaning artifacts:** Some formatting irregularities may remain from Gutenberg source |
|
|
- **Public domain only:** Limited to works published before copyright restrictions |
|
|
|
|
|
### Intended Use Cases |
|
|
|
|
|
- **Stylometry research:** Authorship attribution, style analysis |
|
|
- **Language modeling:** Training author-specific models |
|
|
- **Literary analysis:** Computational study of L. Frank Baum's writing |
|
|
- **Historical NLP:** Studying late 19th- to early 20th-century American language patterns
|
|
- **Educational:** Teaching computational text analysis |
|
|
|
|
|
### Out-of-Scope Uses |
|
|
|
|
|
- Case-sensitive text analysis |
|
|
- Modern language applications |
|
|
- Factual information retrieval |
|
|
- Complete scholarly editions (use academic sources) |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset in your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@article{StroEtal25, |
|
|
title={A Stylometric Application of Large Language Models}, |
|
|
author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.}, |
|
|
journal={arXiv preprint arXiv:2510.21958}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
## Additional Information |
|
|
|
|
|
### Dataset Curator |
|
|
|
|
|
[ContextLab](https://www.context-lab.com/), Dartmouth College |
|
|
|
|
|
### Licensing |
|
|
|
|
|
MIT License - Free to use with attribution |
|
|
|
|
|
### Contact |
|
|
|
|
|
- **Paper & Code:** https://github.com/ContextLab/llm-stylometry |
|
|
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues |
|
|
- **Contact:** Jeremy R. Manning (jeremy.r.manning@dartmouth.edu) |
|
|
|
|
|
### Related Resources |
|
|
|
|
|
Explore datasets for all 8 authors in the study: |
|
|
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus) |
|
|
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus) |
|
|
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus) |
|
|
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus) |
|
|
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus) |
|
|
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus) |
|
|
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus) |
|
|
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus) |
|
|
|