GlyphPrinter / README.md
<h1 align="center">GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering</h1>
<div align="center">
<a href=''><img src='https://img.shields.io/badge/arXiv-2603.02138-b31b1b.svg'></a> &nbsp;&nbsp;&nbsp;&nbsp;
<a href='https://henghuiding.com/GlyphPrinter'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://huggingface.co/FudanCVL/GlyphPrinter"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-HF-orange"></a> &nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://huggingface.co/datasets/FudanCVL/GlyphCorrector"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-HF-orange"></a> &nbsp;&nbsp;&nbsp;&nbsp;
<!--
<a href=""><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Bench-HF-orange"></a> &nbsp;&nbsp;&nbsp;&nbsp;
<a href=""><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Demo%20-HF-orange"></a> &nbsp;&nbsp;&nbsp;&nbsp; -->
</div>
<p align="center"><b>Xincheng Shuai<sup>1,*</sup>, Ziye Li<sup>1,*</sup>, Henghui Ding<sup>1,βœ‰</sup>, Dacheng Tao<sup>2</sup></b></p>
<p align="center">* Equal Contribution, βœ‰ Corresponding Author</p>
<p align="center"><sup>1</sup>Fudan University, <sup>2</sup>Nanyang Technological University</p>
## πŸ”₯πŸ”₯πŸ”₯ News
- [2026/03/15] Release the **training code** and **GlyphCorrector dataset**. πŸ€— [GlyphCorrector](https://huggingface.co/datasets/FudanCVL/GlyphCorrector).
- [2026/03/13] Release the **inference code** and **model weights**. πŸ€— [Model Weights](https://huggingface.co/FudanCVL/GlyphPrinter).
- [2026/02/21] GlyphPrinter is accepted to **CVPR 2026**. πŸ‘πŸ‘
---
## 😊 Introduction
![teaser](assets/teaser.png)
**GlyphPrinter** is a preference-based text rendering framework designed to eliminate the reliance on explicit reward models for visual text generation. It addresses the common failure cases in existing T2I models, such as stroke distortions and incorrect glyphs, especially when rendering complex Chinese characters, multilingual text, or out-of-domain symbols.
---
## πŸ”§ Key Features
- **GlyphCorrector Dataset:** A specialized dataset with region-level glyph preference annotations, enabling the model to learn localized glyph correctness.
- **R-GDPO (Region-Grouped Direct Preference Optimization):** Unlike standard DPO which models global image-level preferences, R-GDPO focuses on local regions where glyph errors typically occur. It optimizes inter- and intra-sample preferences over annotated regions to significantly enhance glyph accuracy.
- **Regional Reward Guidance (RRG):** A novel inference strategy that samples from an optimal distribution with controllable glyph accuracy.
---
## πŸ‘· Pipeline
![pipeline](assets/pipeline.png)
The training of GlyphPrinter consists of two stages:
1. **Stage 1 (Fine-Tuning):** The model is first fine-tuned on multilingual synthetic and realistic text images to establish a strong baseline for text rendering.
2. **Stage 2 (Region-Level Preference Optimization):** The model is optimized using the R-GDPO objective on the GlyphCorrector dataset. This stage aligns model outputs with accurate glyph regions while discouraging incorrect ones, resulting in superior glyph fidelity.
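To make the Stage 2 objective concrete, here is a minimal, dependency-free sketch of a region-grouped DPO term. This is an illustration under assumed interfaces, not the paper's implementation: the standard DPO loss is computed per annotated glyph region and averaged, rather than once over the whole image. The tuple layout `(logp_w, ref_w, logp_l, ref_l)` is a hypothetical interface for the policy and reference log-probabilities of the preferred (`w`) and dispreferred (`l`) samples restricted to one region.

```python
import math

def dpo_term(logp_w, ref_w, logp_l, ref_l, beta=0.1):
    """Standard DPO term: -log sigmoid of the scaled preference margin."""
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def r_gdpo_loss(regions, beta=0.1):
    """Illustrative region-grouped objective: average the DPO term over
    annotated glyph regions instead of computing it once per image.
    `regions` is a list of (logp_w, ref_w, logp_l, ref_l) tuples, one per
    region -- a hypothetical interface, not the released training code."""
    return sum(dpo_term(*r, beta=beta) for r in regions) / len(regions)
```

With a zero margin in every region the loss reduces to `log 2`, and it decreases as the preferred sample's regional log-probability advantage over the reference grows, which is the alignment behavior Stage 2 targets.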
---
## πŸ’» Quick Start
### Configuration
1. Environment setup
```bash
cd GlyphPrinter
conda create -n GlyphPrinter python=3.11.10 -y
conda activate GlyphPrinter
```
2. Requirements installation
```bash
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install --upgrade -r requirements.txt
```
3. Inference
```bash
python app.py
```
Default server port: `7897`.
4. CLI inference without Gradio (conditions are loaded directly from the `saved_conditions` directory; `npz`-format condition files can be constructed manually through `app.py`)
```bash
# list available saved conditions
python3 inference.py --list-conditions
# run inference using the latest condition in saved_conditions/
python3 inference.py \
--prompt "The colorful graffiti font <sks1> printed on the street wall" \
--save-mask
# run inference using a specific condition file
python3 inference.py \
--condition condition_1.npz \
--output-dir outputs_inference
```
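If you want to check what a saved condition file contains before passing it to `inference.py`, a small NumPy helper is enough. The exact key names inside the `.npz` archive depend on how `app.py` saved it, so this sketch simply enumerates whatever arrays are present:

```python
import numpy as np

def inspect_condition(path):
    """Return {array_name: shape} for every array in a saved .npz condition.
    Key names are whatever app.py wrote; we make no assumption about them."""
    with np.load(path, allow_pickle=True) as data:
        return {key: data[key].shape for key in data.files}
```

For example, `inspect_condition("saved_conditions/condition_1.npz")` prints the stored array names and shapes, which is a quick sanity check before running CLI inference.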
## πŸƒ R-GDPO Training
### 1. Prepare GlyphCorrector dataset
Please first download our regional preference dataset [GlyphCorrector](https://huggingface.co/datasets/FudanCVL/GlyphCorrector). Then, place it under `dataset/GlyphCorrector`:
```bash
mkdir -p dataset
huggingface-cli download FudanCVL/GlyphCorrector GlyphCorrector.zip \
--repo-type dataset \
--local-dir dataset \
--local-dir-use-symlinks False
unzip -q dataset/GlyphCorrector.zip -d dataset
```
After extraction, verify the folder structure:
```text
dataset/GlyphCorrector/
β”œβ”€β”€ annotated_mask/
β”‚ β”œβ”€β”€ batch_0/
β”‚ β”‚ β”œβ”€β”€ generated_0_mask.jpg
β”‚ β”‚ └── ...
β”‚ └── batch_1/
└── inference_results/
β”œβ”€β”€ batch_0/
β”‚ β”œβ”€β”€ generated_0.png
β”‚ β”œβ”€β”€ glyph_0.png
β”‚ β”œβ”€β”€ mask_0.png
β”‚ β”œβ”€β”€ prompt.txt
β”‚ └── ...
└── batch_1/
```
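A quick programmatic check of the layout above can save a failed training run. The following sketch (an assumption about the intended pairing, not part of the released code) verifies that both top-level folders exist and that every `batch_*` folder under `inference_results/` has a matching folder under `annotated_mask/`:

```python
from pathlib import Path

def verify_glyphcorrector(root):
    """Check the extracted GlyphCorrector layout: both top-level folders
    exist, and each inference batch has a matching annotated-mask batch."""
    root = Path(root)
    masks = root / "annotated_mask"
    results = root / "inference_results"
    if not masks.is_dir() or not results.is_dir():
        return False
    mask_batches = {p.name for p in masks.iterdir() if p.is_dir()}
    result_batches = {p.name for p in results.iterdir() if p.is_dir()}
    return bool(result_batches) and result_batches <= mask_batches
```

Run it as `verify_glyphcorrector("dataset/GlyphCorrector")` after unzipping; a `False` result usually means the archive was extracted to the wrong directory.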
### 2. Run R-GDPO training
Use the provided script for R-GDPO training:
```bash
bash dpo/train_dpo_group.bash
```
## βš™οΈ Default Model Settings
- Base FLUX model: `black-forest-labs/FLUX.1-dev`
- Stage1 Transformer path: `pretrained/pretrained_stage1_attn_mask_transformer-stage-1-2`
- Stage2 LoRA path: `pretrained/dpo-checkpoint`
---
## πŸ’— Citation
```bibtex
@inproceedings{shuai2026glyphprinter,
  title={GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering},
  author={Xincheng Shuai and Ziye Li and Henghui Ding and Dacheng Tao},
  booktitle={CVPR},
  year={2026}
}
```