---
license: cc0-1.0
task_categories:
- text-generation
- conversational
language:
- en
tags:
- fdr
- franklin-roosevelt
- historical
- leadership
- presidential
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: source_file
    dtype: string
  - name: chunk_id
    dtype: string
  - name: timestamp
    dtype: string
  splits:
  - name: train
    num_bytes: 2196098
    num_examples: 2001
  download_size: 300091
  dataset_size: 2196098
---

# FDR Training Corpus

This dataset contains training material for creating Franklin Delano Roosevelt (FDR) language models and conversational agents.

## Dataset Description

**Purpose**: Training data for LoRA fine-tuning that captures FDR's speaking style, vocabulary, and historical perspectives.

**Content**: Speeches, letters, fireside chats, press conferences, and other public communications from FDR's presidency (1933-1945).

**License**: CC0-1.0 (public domain); all content comes from historical public-domain sources.

## Usage

This dataset is designed for:

- LoRA fine-tuning of language models
- Training conversational AI agents with FDR's personality
- Historical language-model research
- Educational applications
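
Each record follows the schema declared in the metadata above: a `messages` list of role/content turns plus provenance fields. A minimal sketch of one record's shape (the field values below are illustrative placeholders, not actual dataset entries):

```python
# Illustrative record matching the declared features. The paths, ids,
# and message text are hypothetical examples, not dataset excerpts.
record = {
    "messages": [
        {"role": "user", "content": "What did you tell the nation about fear?"},
        {"role": "assistant", "content": "The only thing we have to fear is fear itself."},
    ],
    "source_file": "speeches/example_speech.txt",  # hypothetical path
    "chunk_id": "example_speech-0001",             # hypothetical id
    "timestamp": "1933-03-04",                     # hypothetical value
}

# Turns alternate user/assistant, the shape most chat templates and
# LoRA fine-tuning scripts expect.
roles = [m["role"] for m in record["messages"]]
print(roles)  # ['user', 'assistant']
```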

## Data Structure

```
data/
├── speeches/           # Major speeches and addresses
├── fireside_chats/     # Radio addresses to the nation
├── letters/            # Personal and official correspondence
├── press_conferences/  # Q&A sessions with press
└── misc/               # Other historical documents
```
|
|
|
|

## Training Recommendations

**LoRA Configuration:**

- Rank: 16
- Alpha: 32
- Dropout: 0.1
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`
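
The configuration above can be expressed with the `peft` library (a sketch, assuming `peft` is available; the `task_type` value is an assumption for causal-LM fine-tuning):

```python
from peft import LoraConfig

# Mirrors the recommended settings above; CAUSAL_LM task type is assumed.
lora_config = LoraConfig(
    r=16,                 # rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```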

**Training Parameters:**

- Learning rate: 3e-4
- Epochs: 3-5
- Batch size: 2-4
- Max sequence length: 512
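
These hyperparameters map onto `transformers.TrainingArguments` roughly as follows (a sketch assuming the `transformers` Trainer; `output_dir` is a placeholder, and the maximum sequence length is applied at tokenization time rather than here):

```python
from transformers import TrainingArguments

# Sketch of the recommended hyperparameters; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="fdr-lora",          # placeholder output directory
    learning_rate=3e-4,
    num_train_epochs=3,             # 3-5 recommended above
    per_device_train_batch_size=2,  # 2-4 recommended above
)

# Max sequence length (512) is enforced when tokenizing, e.g.:
# tokenizer(text, truncation=True, max_length=512)
```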

## Citation

When using this dataset, please acknowledge:

- The historical nature of the content (1933-1945)
- The public-domain status of the source materials
- That the dataset is intended for educational and research use

Generated on: 2025-09-08