---
title: DocGenie API
emoji: πŸ“„
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
pinned: false
---

# DocGenie

## Project structure
The source code under `/docgenie` is split into three parts:
- **generation**: Code responsible for synthesizing datasets.
- **evaluation**: Code responsible for training models on original/synthetic data and evaluating them. Also contains code to load these datasets.
- **analyzation**: Code responsible for analyzing original/synthetic data, e.g. clustering, LayoutFID scores, etc.

## Setting up project dependencies
Install Astral's uv (https://docs.astral.sh/uv/getting-started/installation/):
```
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install dependencies (point the uv cache at a directory in your data folder, since the default cache location under your home directory has limited space):
```
uv sync --cache-dir /data/proj/$USER/.cache/uv/
``` 

Source the virtual environment
```
source .venv/bin/activate
```

Alternatively, run commands directly with `uv run`
```
uv run python /path/to/script
```

## Setting up dependencies for generation pipeline
Install Playwright's Chromium by running
```
playwright install chromium
```

and also download a standalone Chromium build for PDF conversion:
```
wget -O chrome.zip "https://download-chromium.appspot.com/dl/Linux_x64?type=snapshots"
unzip chrome.zip
```

Add Chromium to your PATH
```
echo "export PATH=\"$(pwd)/chrome-linux:\$PATH\"" >> ~/.bashrc
```

Reload your shell
```
source ~/.bashrc
```

Verify installation
```
chrome --version
```

# Synthesis Pipeline
- Set the env variable `ANTHROPIC_API_KEY` to your Anthropic API key.
- Create a new syn dataset definition file in `data/syn_dataset_definitions`. For a template, refer to `docvqa-test.yaml`.
- Execute `docgenie/generation/main.py SynDsDefFname`, where `SynDsDefFname` is the filename of the syn dataset definition without the extension.
- Data will be stored in `data/datasets/SynDsName`, where `SynDsName` is the `name` field in the syn dataset definition.
- Final PDFs will be stored in the subdirectory `pdf_final`.
  - Handwriting synthesis is currently not implemented, so the final PDFs will be missing that text. To see the PDFs with the text that will be replaced by handwriting, see the PDFs in the subdirectory `pdf_pass1`.
  - Visual element insertion is currently not implemented.

# DocVQA Handwriting Generation

A toolkit for generating synthetic handwriting images for document visual question answering (DocVQA) tasks. This project provides scripts to generate, process, and enhance handwritten text overlays on documents using either font-based rendering or diffusion-based deep learning models.

## Overview

This repository contains tools to:
- Generate synthetic handwriting from bounding box specifications
- Apply post-processing effects (blur, antialiasing) for realistic rendering
- Support multiple generation backends (font-based, diffusion model)
- Handle word segmentation and concatenation for long words
- Maintain consistent author styles across documents

## Project Structure

```
docvqa_handwriting_generation/
β”œβ”€β”€ model/                      # Model architecture and training utilities
β”‚   β”œβ”€β”€ text_encoder.py
β”‚   β”œβ”€β”€ tokenizer.py
β”‚   β”œβ”€β”€ train_hugging.py
β”‚   └── experiments/
β”‚       └── hf_conditional_latent/
β”‚           β”œβ”€β”€ config.yaml
β”‚           β”œβ”€β”€ writer_id_map.json
β”‚           β”œβ”€β”€ checkpoints/
β”‚           └── cached_vae/
β”œβ”€β”€ scripts/                    # Generation and evaluation scripts
β”‚   β”œβ”€β”€ generate_handwriting_diffusion_raw.py
β”‚   β”œβ”€β”€ generate_handwriting_resized.py
β”‚   β”œβ”€β”€ generate_writer_style_eval.py
β”‚   └── add_handwriting_blur.py
└── requirements.txt
```
## Directory Structure for Handwritten Text Images

```
data/
└── datasets/
    └── synthesized_datasets/
        └── DocVQA-XYZ-Dataset/
            └── handwriting_raw_tokens/          # One folder per document, each containing the images
                β”œβ”€β”€ 7cd-ef-xy456-xxx-xxx_0/      # Directory for the document named 7cd-ef-xy456-xxx-xxx_0, etc.
                β”‚   β”œβ”€β”€ hw01_0.png               # Images
                β”‚   β”œβ”€β”€ hw01_1.png
                β”‚   └── ...
                └── 32xc-ef-xy456-xxx-xxx_0/
                    β”œβ”€β”€ hw01_0.png
                    β”œβ”€β”€ hw01_1.png
                    └── ...
```

Dataset archives unpack directly into the repository root (e.g. `docvqa-handwritten-sizes4/`, `docvqa-test/`, `docvqa-viselems/`).

## Installation

### Requirements

- Python 3.8+
- PyTorch (for diffusion backend)
- Other dependencies listed in `requirements.txt`

### Setup

1. Clone the repository:
```bash
git clone <repository-url>
cd docvqa_handwriting_generation
```

2. Install dependencies:
TODO: update `pyproject.toml` for dependencies; we now use uv.
```bash
pip install -r requirements.txt
```

3. Download or train the diffusion model:

**Pre-trained Models:** `https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing`

Expected structure after extraction:
```
model/
└── experiments/
    └── hf_conditional_latent/
        β”œβ”€β”€ config.yaml              # Model configuration
        β”œβ”€β”€ writer_id_map.json       # Writer ID to index mapping
        β”œβ”€β”€ cached_vae/             # VAE decoder (auto-downloaded on first use)
        β”‚   β”œβ”€β”€ config.json
        β”‚   └── diffusion_pytorch_model.safetensors
        └── checkpoints/
            β”œβ”€β”€ latest.pt            # Latest checkpoint
            └── checkpoint-####.pt   # Epoch checkpoints
```

**Note:** The VAE decoder will be automatically downloaded from HuggingFace on first use and cached locally.

4. Download datasets (optional, for testing):

**DocVQA Handwritten Dataset:** `https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing`

## Usage

### 1. Diffusion-Based Handwriting Generation

Generate handwriting tokens using a conditional diffusion model with writer style control and intelligent word splitting:

```bash
python scripts/generate_handwriting_diffusion_raw.py \
    --input-dir data/docvqa-handwritten-sizes4/handwriting_bbox \
    --output-dir output/handwriting_raw_tokens \
    --run-dir model/experiments/hf_conditional_latent \
    --checkpoint latest.pt \
    --steps 30 \
    --split-length 7 \
    --batch-size 8 \
    --temperature 1.0 \
    --device cuda
```

**Key Features:**

**Intelligent Word Splitting:**
- Words longer than `--split-length` are automatically split into segments
- Example: `--split-length 7` β†’ "generation" becomes "generat" + "ion"
- Segments are generated separately and stitched horizontally
- Set `--split-length 0` to disable splitting

**Writer Style Control:**
- Each author gets a consistent style ID per document
- Style IDs are derived from the model's trained writer embeddings
- Maintains style consistency across all words from the same author

**Conditional Diffusion:**
- Uses HuggingFace UNet2DConditionModel with cross-attention
- Character-level text encoding via transformer
- VAE latent space generation (auto-downloads stabilityai/sd-vae-ft-mse)
- Configurable sampling temperature for quality/diversity tradeoff

**Arguments:**
- `--run-dir`: Path to model experiment directory
- `--checkpoint`: Checkpoint filename (default: `latest.pt`)
- `--steps`: Number of diffusion steps (default: 30; more = better quality)
- `--split-length`: Max word length before splitting (default: 7)
- `--temperature`: Sampling temperature (0.7-0.9 = conservative, 1.0 = standard, 1.1-1.3 = creative)
- `--batch-size`: Batch size for GPU efficiency (default: 8)
- `--use-ema`: Use EMA weights if available in checkpoint

**Output:** 
- Images: `<output-dir>/<json_stem>/hw<id>_<word_no>.png`
- Mapping: `<output-dir>/raw_token_map.json`

**Output Features:**
- RGBA format with transparent backgrounds
- Tight cropping to handwriting content
- Word segments automatically stitched horizontally
- Baseline-aligned concatenation for natural appearance
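
The baseline-aligned stitching can be pictured with a short sketch. This is a minimal reconstruction assuming each segment's baseline sits at a fixed fraction of its height; the actual script may estimate the baseline from the ink itself, and `stitch_segments` is an illustrative name:

```python
# Sketch of horizontal stitching with a shared baseline (assumed to sit at
# a fixed fraction of each segment's height).
from PIL import Image

def stitch_segments(segments: list[Image.Image], baseline_frac: float = 0.8) -> Image.Image:
    baselines = [int(s.height * baseline_frac) for s in segments]
    top = max(baselines)                                   # space above the baseline
    bottom = max(s.height - b for s, b in zip(segments, baselines))
    canvas = Image.new("RGBA", (sum(s.width for s in segments), top + bottom), (0, 0, 0, 0))
    x = 0
    for seg, b in zip(segments, baselines):
        canvas.paste(seg, (x, top - b), seg)               # align each baseline to y = top
        x += seg.width
    return canvas
```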

### 2. Resized Handwriting Generation

Generate handwriting scaled to fit specific bounding boxes:

```bash
python scripts/generate_handwriting_resized.py \
    --input-dir data/syn_docvqa/handwriting_bbox \
    --output-dir output/handwriting_rendered \
    --backend font \
    --fonts-dir assets/fonts \
    --max-workers 8
```

**Backends:**
- `font`: Pillow-based pseudo-handwriting (fast, no GPU needed)
- `diffusion`: Deep learning model (requires GPU, model artifacts)

**Output:**
- Images: `<output-dir>/<json_stem>__<hw_id>__seg<index>.png`
- Mapping: `<output-dir>/handwriting_image_map.json`
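
For intuition, the `font` backend boils down to rendering a word with a handwriting-style TTF via Pillow and scaling it into the target box. The following is a minimal sketch, not the script's actual code; the function name and font filename are illustrative:

```python
# Render a word large with a handwriting-style font, then scale it to the
# target bbox. The specific .ttf name under assets/fonts is hypothetical.
from PIL import Image, ImageDraw, ImageFont

def render_word_to_bbox(text: str, bbox: tuple[float, float, float, float],
                        font_path: str = "assets/fonts/handwriting.ttf") -> Image.Image:
    font = ImageFont.truetype(font_path, size=64)   # render large, downscale later
    left, top, right, bottom = font.getbbox(text)   # tight extent of the rendered text
    img = Image.new("RGBA", (right - left, bottom - top), (0, 0, 0, 0))
    ImageDraw.Draw(img).text((-left, -top), text, font=font, fill=(20, 20, 60, 255))
    # Scale to the bbox size (x1, y1, x2, y2 in page coordinates).
    x1, y1, x2, y2 = bbox
    return img.resize((max(1, round(x2 - x1)), max(1, round(y2 - y1))), Image.LANCZOS)
```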

### 3. Post-Processing with Blur

Add realistic blur and anti-aliasing to generated handwriting:

```bash
python scripts/add_handwriting_blur.py \
    --input-root output/handwriting_raw_tokens \
    --output-root output/handwriting_raw_tokens_blur \
    --mapping-json output/handwriting_raw_tokens/raw_token_map.json \
    --append-mapping \
    --radius-min 0.6 \
    --radius-max 1.8 \
    --antialias
```

**Features:**
- Gaussian blur with configurable radius
- Optional downscale+upscale anti-aliasing
- Advanced edge refinement (erosion, dilation, unsharp mask)
- Updates mapping JSON with blurred image paths
- Supports in-place or mirror directory output
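
A minimal sketch of the core blur pass, assuming Pillow-based processing as the features above suggest (the function name and defaults mirror the CLI flags but are illustrative):

```python
# Random Gaussian blur in [radius_min, radius_max], plus optional
# downscale+upscale smoothing to soften aliased edges.
import random
from PIL import Image, ImageFilter

def blur_token(img: Image.Image, radius_min: float = 0.6, radius_max: float = 1.8,
               antialias: bool = True, scale_factor: float = 0.75) -> Image.Image:
    out = img.filter(ImageFilter.GaussianBlur(random.uniform(radius_min, radius_max)))
    if antialias:
        w, h = out.size
        small = out.resize((max(1, int(w * scale_factor)), max(1, int(h * scale_factor))),
                           Image.LANCZOS)
        out = small.resize((w, h), Image.LANCZOS)
    return out
```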

### 4. Writer Style Evaluation Exports

Generate per-writer evaluation samples with a curated word list and DPM-Solver++ sampling:

```bash
python scripts/generate_writer_style_eval.py \
    --run-dir model/experiments/hf_conditional_latent \
    --checkpoint latest.pt \
    --output-dir writer_eval \
    --max-words 48 \
    --batch-size 12 \
    --num-steps 30 \
    --temperature 0.7 \
    --device cuda
```

**Outputs:**
- PNG samples saved under `<output-dir>/writer_XXXX/`
- `<output-dir>/writer_style_manifest.json` summarizing words, writers, and generation metadata
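
For reference, a DPM-Solver++ sampling loop of this shape can be written with diffusers' `DPMSolverMultistepScheduler`. This is a sketch under the assumption that the script follows the standard diffusers pattern; `unet` and `cond` stand in for the loaded UNet and text-encoder output, and the latent shape is illustrative:

```python
# Generic DPM-Solver++ denoising loop; returns latents to be decoded by the VAE.
import torch
from diffusers import DPMSolverMultistepScheduler, UNet2DConditionModel

def sample_dpm(unet: UNet2DConditionModel, cond: torch.Tensor,
               num_steps: int = 30, temperature: float = 0.7,
               batch: int = 12, latent_hw: tuple[int, int] = (8, 32)) -> torch.Tensor:
    scheduler = DPMSolverMultistepScheduler(num_train_timesteps=1000,
                                            algorithm_type="dpmsolver++")
    scheduler.set_timesteps(num_steps)                       # e.g. --num-steps 30
    # Temperature scaling the initial noise is a common convention (an assumption here).
    latents = torch.randn(batch, 4, *latent_hw) * temperature
    for t in scheduler.timesteps:
        with torch.no_grad():
            noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```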

## Input Format

### Handwriting Bbox JSON

Input JSON files specify bounding boxes and text for handwriting generation:

```json
[
  {
    "id": "hw0",
    "text": "Example Text",
    "author-id": "author1",
    "bboxes": [
      "110.69,124.79,161.76,143.41,Example,22,0,0",
      "166.85,124.79,204.83,143.41,Text,22,0,1"
    ]
  }
]
```

**Bbox format:** `x1,y1,x2,y2,text,block_no,line_no,word_no`
- Coordinates are floats
- Last 3 values are indices for grouping (block, line, word)
- Text can contain any characters (including commas)
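
Because the text field may itself contain commas, a robust way to parse each entry is to take the four coordinates from the front and the three indices from the back, rejoining the middle as the text (a sketch, not the project's parser):

```python
# Parse one bbox string of the form x1,y1,x2,y2,text,block_no,line_no,word_no,
# where `text` may contain commas.
def parse_bbox(entry: str) -> dict:
    parts = entry.split(",")
    x1, y1, x2, y2 = map(float, parts[:4])
    block_no, line_no, word_no = map(int, parts[-3:])
    text = ",".join(parts[4:-3])      # everything between is the text
    return {"bbox": (x1, y1, x2, y2), "text": text,
            "block": block_no, "line": line_no, "word": word_no}

parse_bbox("110.69,124.79,161.76,143.41,Example,22,0,0")
```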

## Key Features

### Intelligent Word Splitting
- Automatically splits words exceeding `--split-length` characters
- Example: "generation" (10 chars) β†’ "generat" + "ion" (with split_length=7)
- Segments generated independently with same style
- Stitched horizontally with baseline alignment
- Configurable via `--split-length` parameter (0 = no splitting)
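
A minimal sketch of this rule (the actual script may pick split points differently, but this reproduces the example above):

```python
# Chunk a word into segments of at most `split_length` characters;
# split_length <= 0 disables splitting.
def split_word(word: str, split_length: int = 7) -> list[str]:
    if split_length <= 0 or len(word) <= split_length:
        return [word]
    return [word[i:i + split_length] for i in range(0, len(word), split_length)]

assert split_word("generation", 7) == ["generat", "ion"]
```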

### Writer Style Consistency
- Each author ID gets consistent style per document
- Style derived from trained writer embeddings in model
- Falls back to deterministic hashing for unknown authors
- Reproducible with same `--seed` value
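
The deterministic fallback can be sketched as hashing the author ID into the range of trained writer styles. Details here are assumptions; a stable hash (rather than Python's salted `hash()`) keeps the mapping reproducible across runs:

```python
# Map an unknown author ID to a fixed writer style index.
import hashlib

def style_id_for_author(author_id: str, num_writers: int) -> int:
    digest = hashlib.sha256(author_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_writers

style_id_for_author("author1", 340)  # same input -> same style, every run
```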

### Conditional Text Generation
- Character-level transformer text encoder
- Cross-attention conditioning in UNet
- VAE latent space generation (64Γ—256 latent β†’ decoded to full resolution)
- Temperature control for quality/diversity tradeoff
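
The decode step looks roughly like the following with diffusers' `AutoencoderKL` and the `stabilityai/sd-vae-ft-mse` weights named above (the latent shape here is illustrative; the VAE upsamples 8x per spatial dimension):

```python
# Decode diffusion latents to pixels with the Stable Diffusion VAE.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
latents = torch.randn(1, 4, 8, 32)                 # placeholder latent
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
print(image.shape)                                 # (1, 3, 64, 256)
```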

### Batched GPU Generation
- Process multiple segments in parallel
- Configurable batch size for memory optimization
- Progress tracking with tqdm

### Output Quality
- RGBA format with transparent backgrounds
- Tight cropping to ink extents
- Otsu thresholding for clean binarization
- Baseline-aligned word segment stitching
- Version-controlled output mappings
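
Tight cropping to the ink extents can be done from the alpha channel alone; a sketch of one way to implement it:

```python
# Crop an RGBA token to the bounding box of its non-transparent pixels.
from PIL import Image

def tight_crop(img: Image.Image) -> Image.Image:
    alpha = img.getchannel("A")
    bbox = alpha.getbbox()            # extent of non-zero alpha, or None if empty
    return img.crop(bbox) if bbox else img
```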

## Advanced Options

### Diffusion Generation Parameters
- `--steps`: Number of diffusion steps (default: 30; more = higher quality, slower)
  - Quick preview: 15-20 steps
  - Production: 30-50 steps
- `--split-length`: Maximum word length before splitting (default: 7; 0 = no splitting)
- `--temperature`: Sampling temperature (default: 1.0)
  - 0.7-0.9: Conservative, cleaner output
  - 1.0: Standard sampling
  - 1.1-1.3: Creative, more diverse
- `--batch-size`: Batch size for GPU processing (default: 8)
- `--seed`: Random seed for reproducibility (default: 42)
- `--use-ema`: Use EMA weights if available (improves quality)
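
A sketch of how `--seed` and `--temperature` plausibly interact, following common diffusion practice (an assumption about the implementation): the seed fixes the initial noise, and the temperature scales it.

```python
# Deterministic initial noise (seed 42), scaled by the sampling temperature.
import torch

generator = torch.Generator().manual_seed(42)
latents = torch.randn(8, 4, 8, 32, generator=generator) * 1.0  # temperature 1.0
```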

### Blur Parameters
- `--radius`: Fixed blur radius (overrides min/max)
- `--radius-min/max`: Random uniform blur range
- `--antialias`: Enable downscale+upscale smoothing
- `--scale-factor`: Downscale factor for antialiasing (default: 0.75)

## Troubleshooting

### CUDA Out of Memory
- Reduce `--batch-size` to 1-4
- Reduce `--steps` (try 20-30)
- Use CPU: `--device cpu` (much slower)
- Close other GPU applications

### Missing Model Files
Ensure you have the trained model checkpoint in:
```
model/experiments/hf_conditional_latent/
β”œβ”€β”€ config.yaml
β”œβ”€β”€ writer_id_map.json
└── checkpoints/
    └── latest.pt
```

The VAE decoder will be auto-downloaded on first use to:
```
model/experiments/hf_conditional_latent/cached_vae/
```

### Import Errors
Make sure all dependencies are installed:
```bash
pip install -r requirements.txt
```

Ensure model components are accessible:
```bash
# From project root
python -c "from model.text_encoder import TextEncoder; from model.tokenizer import CharTokenizer"
```

### Style Not Working
Check that `writer_id_map.json` exists in your run directory and contains the author IDs from your dataset.

## Model Architecture

### Components
- **Text Encoder**: Character-level transformer (256-dim, 6 layers, 8 heads)
- **UNet**: HuggingFace UNet2DConditionModel with cross-attention
- **VAE**: Stable Diffusion VAE (stabilityai/sd-vae-ft-mse)
- **Tokenizer**: Character-level with special tokens (PAD, UNK, SOS, EOS)
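
Putting the text-encoder description into code, a plausible reconstruction (not the project's `model/text_encoder.py`) looks like this:

```python
# Character-level transformer encoder: 256-dim, 6 layers, 8 heads, as
# described above. Hyperparameter defaults beyond those are assumptions.
import torch
import torch.nn as nn

class CharTextEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256,
                 num_layers: int = 6, num_heads: int = 8, max_len: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)          # char embeddings
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, num_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Per-character hidden states, used as cross-attention context in the UNet.
        x = self.embed(token_ids) + self.pos[:, : token_ids.size(1)]
        return self.encoder(x)
```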

### Training
Refer to `model/train_hugging.py` and `training/config_latent.yaml` for training configuration.

## Downloads

### Pre-trained Model
**Required for diffusion-based generation**
- Download Link: `https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing`
- Extract to: `model/experiments/`
- Required files:
  - `config.yaml` - Model configuration
  - `writer_id_map.json` - Writer style mappings
  - `checkpoints/latest.pt` - Model weights

### Datasets
**Optional - for testing and examples**
- DocVQA Handwritten Dataset: `https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing`
- Extract to: `data/`

## Citation


## License

[Specify your license here]

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.