# TinyTimV1: Fine-tuning TinyLlama on Finnegans Wake
A project exploring the fine-tuning of TinyLlama-1.1B on James Joyce's *Finnegans Wake* to generate Joyce-inspired text.

## Overview
This project fine-tunes the TinyLlama-1.1B-Chat model on the complete text of James Joyce's *Finnegans Wake*, creating a language model capable of generating text in Joyce's distinctive experimental style. The model learns to replicate the complex wordplay, neologisms, and stream-of-consciousness narrative techniques characteristic of Joyce's final work.

## Files
- `process_wake.py` - Preprocesses the raw text, removes page numbers, and splits it into manageable chunks
- `fine_tune_joyce.py` - Main training script using HuggingFace Transformers
- `text_gen.py` - Text generation script for the fine-tuned model
- `finn_wake.txt` - Complete text of Finnegans Wake (1.51 MB)
- `finn_wake.csv` - Processed dataset in CSV format
- `finn_wake_dataset/` - Tokenized dataset directory

## Usage
### 1. Data Preprocessing

```bash
python process_wake.py
```

This removes page numbers and splits the text into 100-word chunks for training.
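The chunking step can be sketched roughly as follows. This is a minimal illustration, not the exact contents of `process_wake.py`; in particular, the assumption that page numbers appear as digits alone on a line is hypothetical.

```python
import re


def chunk_text(text, words_per_chunk=100):
    """Strip bare page-number lines, then split into fixed-size word chunks."""
    # Assumption: page numbers sit alone on their own lines
    text = re.sub(r"^\s*\d+\s*$", "", text, flags=re.MULTILINE)
    words = text.split()
    return [
        " ".join(words[i:i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]


# e.g. chunk_text(open("finn_wake.txt").read()) yields 100-word training chunks
chunks = chunk_text("riverrun, past Eve and Adam's " * 50)
```

The fixed 100-word window keeps every training example comfortably inside the 128-token limit used below.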

### 2. Fine-tuning

```bash
python fine_tune_joyce.py
```

Fine-tunes TinyLlama on the processed dataset for 3 epochs with CPU training.
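The training setup likely resembles the configuration sketch below, assuming the standard HuggingFace `Trainer` API. The output directory name is hypothetical, and `fine_tune_joyce.py` may differ in details.

```python
from datasets import load_from_disk
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_from_disk("finn_wake_dataset")  # tokenized chunks, max length 128

args = TrainingArguments(
    output_dir="tinytim-checkpoints",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=1,
    save_steps=500,
    use_cpu=True,  # older transformers versions spell this no_cuda=True
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (inputs shifted by one)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
# To resume after an interruption:
# trainer.train(resume_from_checkpoint=True)
```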

### 3. Text Generation

```bash
python text_gen.py
```

Generates Joyce-inspired text using the fine-tuned model.

## Model Details

- Base Model: TinyLlama-1.1B-Chat-v1.0
- Training Data: Finnegans Wake (~1.5 MB of text)
- Training Parameters:
  - 3 epochs
  - Batch size: 1
  - Max sequence length: 128 tokens
- Generation Parameters:
  - Temperature: 0.7
  - Top-k: 50, Top-p: 0.95
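As an illustration of what the Top-k and Top-p settings do at sampling time, here is a minimal filtering sketch (not taken from `text_gen.py`; library implementations differ slightly in how the p threshold cuts off):

```python
def filter_top_k_top_p(probs, k=50, p=0.95):
    """Keep the k most likely tokens, then the smallest prefix of them whose
    cumulative probability reaches p; renormalize the survivors."""
    ranked = sorted(enumerate(probs), key=lambda t: t[1], reverse=True)[:k]
    kept, cumulative = [], 0.0
    for idx, prob in ranked:
        kept.append((idx, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {idx: prob / total for idx, prob in kept}


# With a toy 5-token vocabulary, only the high-probability tokens survive
filtered = filter_top_k_top_p([0.6, 0.25, 0.1, 0.04, 0.01], k=50, p=0.9)
```

The next token is then sampled from the renormalized survivors; temperature is applied earlier, by dividing the logits by 0.7 before the softmax.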

## Example Output

Input: "ae left to go to ireland and found a fairy"
The model generates text continuing in Joyce's experimental style with invented words, Irish references, and complex linguistic play.

## Requirements

- transformers
- datasets
- pandas
- torch

## Installation

```bash
pip install transformers datasets pandas torch
```

## Notes

- Training was performed on CPU due to resource constraints
- Model checkpoints are saved every 500 steps
- Training can be resumed from a saved checkpoint