GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering


Xincheng Shuai1,*, Ziye Li1,*, Henghui Ding1,✉, Dacheng Tao2

* Equal Contribution, ✉ Corresponding Author

1Fudan University, 2Nanyang Technological University

## 🔥🔥🔥 News

- [2026/03/15] Release the **training code** and **GlyphCorrector dataset**. 🤗 [GlyphCorrector](https://huggingface.co/datasets/FudanCVL/GlyphCorrector).
- [2026/03/13] Release the **inference code** and **model weights**. 🤗 [Model Weight](https://huggingface.co/FudanCVL/GlyphPrinter).
- [2026/02/21] GlyphPrinter is accepted to **CVPR 2026**. 👏👏

---

## 😊 Introduction

![teaser](assets/teaser.png)

**GlyphPrinter** is a preference-based text rendering framework that removes the need for explicit reward models in visual text generation. It addresses common failure cases of existing T2I models, such as stroke distortions and incorrect glyphs, especially when rendering complex Chinese characters, multilingual text, or out-of-domain symbols.

---

## 🔧 Key Features

- **GlyphCorrector Dataset:** A specialized dataset with region-level glyph preference annotations that helps the model learn localized glyph correctness.
- **R-GDPO (Region-Grouped Direct Preference Optimization):** Unlike standard DPO, which models global image-level preferences, R-GDPO focuses on the local regions where glyph errors typically occur. It optimizes inter- and intra-sample preferences over annotated regions to significantly improve glyph accuracy.
- **Regional Reward Guidance (RRG):** A novel inference strategy that samples from an optimal distribution with controllable glyph accuracy.

---

## 👷 Pipeline

![pipeline](assets/pipeline.png)

GlyphPrinter is trained in two stages:

1. **Stage 1 (Fine-Tuning):** The model is first fine-tuned on multilingual synthetic and realistic text images to establish a strong baseline for text rendering.
2. **Stage 2 (Region-Level Preference Optimization):** The model is then optimized with the R-GDPO objective on the GlyphCorrector dataset. This stage aligns model outputs with accurate glyph regions while discouraging incorrect ones, yielding superior glyph fidelity.

---

## 💻 Quick Start

### Configuration
1. Environment setup

   ```bash
   cd GlyphPrinter
   conda create -n GlyphPrinter python=3.11.10 -y
   conda activate GlyphPrinter
   ```

2. Requirements installation

   ```bash
   pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
   pip install --upgrade -r requirements.txt
   ```

3. Inference

   ```bash
   python app.py
   ```

   The default server port is `7897`.

4. CLI inference without Gradio (loads conditions directly from the `saved_conditions` directory; you can construct an npz-format condition manually through `app.py`)

   ```bash
   # list available saved conditions
   python3 inference.py --list-conditions

   # run inference using the latest condition in saved_conditions/
   python3 inference.py \
       --prompt "The colorful graffiti font printed on the street wall" \
       --save-mask

   # run inference using a specific condition file
   python3 inference.py \
       --condition condition_1.npz \
       --output-dir outputs_inference
   ```

## 🏃 R-GDPO Training

### 1. Prepare the GlyphCorrector dataset

First download our regional preference dataset, [GlyphCorrector](https://huggingface.co/datasets/FudanCVL/GlyphCorrector), and place it under `dataset/GlyphCorrector`:

```bash
mkdir -p dataset
huggingface-cli download FudanCVL/GlyphCorrector GlyphCorrector.zip \
    --repo-type dataset \
    --local-dir dataset \
    --local-dir-use-symlinks False
unzip -q dataset/GlyphCorrector.zip -d dataset
```

After extraction, verify the folder structure:

```text
dataset/GlyphCorrector/
├── annotated_mask/
│   ├── batch_0/
│   │   ├── generated_0_mask.jpg
│   │   └── ...
│   └── batch_1/
└── inference_results/
    ├── batch_0/
    │   ├── generated_0.png
    │   ├── glyph_0.png
    │   ├── mask_0.png
    │   ├── prompt.txt
    │   └── ...
    └── batch_1/
```
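You can also sanity-check the extracted layout with a short script. This is a minimal sketch, assuming the pairing convention shown in the tree above (`generated_<i>_mask.jpg` under `annotated_mask/` should correspond to `generated_<i>.png` under `inference_results/`); the function name `check_pairs` is illustrative and not part of the repository:

```python
import os

def check_pairs(root):
    """Return generated images that are missing for an annotated mask.

    Walks annotated_mask/batch_*/generated_<i>_mask.jpg and checks that
    inference_results/<batch>/generated_<i>.png exists.
    """
    missing = []
    mask_root = os.path.join(root, "annotated_mask")
    for batch in sorted(os.listdir(mask_root)):
        batch_dir = os.path.join(mask_root, batch)
        for name in sorted(os.listdir(batch_dir)):
            if not name.endswith("_mask.jpg"):
                continue
            stem = name[: -len("_mask.jpg")]  # e.g. "generated_0"
            image = os.path.join(root, "inference_results", batch, stem + ".png")
            if not os.path.exists(image):
                missing.append(image)
    return missing
```

An empty return value means every annotated mask has its paired generated image.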
### 2. Run R-GDPO training

Use the provided script for R-GDPO training:

```bash
bash dpo/train_dpo_group.bash
```

## ⚙️ Default Model Settings

- Base FLUX model: `black-forest-labs/FLUX.1-dev`
- Stage1 Transformer path: `pretrained/pretrained_stage1_attn_mask_transformer-stage-1-2`
- Stage2 LoRA path: `pretrained/dpo-checkpoint`

---

## 💗 Citation

```bibtex
@inproceedings{shuai2026glyphprinter,
  title={GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering},
  author={Xincheng Shuai and Ziye Li and Henghui Ding and Dacheng Tao},
  booktitle={CVPR},
  year={2026}
}
```