---
license: apache-2.0
pipeline_tag: image-to-image
---
# ODTSR: One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution
This repository contains the official implementation of **ODTSR**, a model presented in the paper:
[**One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution**](https://huggingface.co/papers/2511.17138)
**Authors**: Yushun Fang, Yuxiang Chen, Shibo Yin, Qiang Hu, Jiangchao Yao, Ya Zhang, Xiaoyun Zhang, Yanfeng Wang
**Affiliations**: Shanghai Jiao Tong University, Xiaohongshu Inc
**Code**: [https://github.com/RedMediaTech/ODTSR](https://github.com/RedMediaTech/ODTSR)
<div align="center">
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/1.png" alt="ODTSR Overview Framework" width="80%">
</div>
## Overview
Recent advances in diffusion-based real-world image super-resolution (Real-ISR) have demonstrated remarkable perceptual quality, yet balancing fidelity and controllability remains an open problem: multi-step diffusion-based methods suffer from generative diversity and randomness, resulting in low fidelity, while one-step methods lose control flexibility due to fidelity-specific fine-tuning.
**ODTSR** addresses this with a one-step diffusion transformer, built on Qwen-Image, that performs Real-ISR while accounting for fidelity and controllability simultaneously. It introduces a newly designed **Noise-hybrid Visual Stream (NVS)** that receives low-quality (LQ) images under adjustable noise (Control Noise) and consistent noise (Prior Noise). Furthermore, **Fidelity-aware Adversarial Training (FAA)** is employed to enhance controllability and achieve one-step inference. ODTSR not only achieves state-of-the-art (SOTA) performance on generic Real-ISR, but also enables prompt controllability in challenging scenarios such as real-world scene text image super-resolution (STISR) of Chinese characters, without training on scenario-specific datasets.
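The paper gives the exact formulation of NVS and FAA; purely for intuition, here is a loose conceptual sketch (all names are hypothetical, not the released code) of how a fidelity weight can blend the LQ latent with adjustable Control Noise while the Prior Noise stays consistent:
```python
import torch

def noise_hybrid_input(lq_latent: torch.Tensor, f: float, seed: int = 0):
    """Loose conceptual sketch of the Noise-hybrid Visual Stream input.

    f in [0, 1]: higher f keeps the control stream closer to the LQ latent
    (higher fidelity); lower f lets the adjustable Control Noise dominate
    (stronger detail generation and prompt adherence).
    """
    g = torch.Generator().manual_seed(seed)
    # Control Noise: adjustable perturbation of the LQ content.
    control_noise = torch.randn(lq_latent.shape, generator=g).to(lq_latent)
    # Prior Noise: drawn deterministically (fixed seed), so it stays
    # consistent regardless of the fidelity weight.
    prior_noise = torch.randn(lq_latent.shape, generator=g).to(lq_latent)
    control_stream = f * lq_latent + (1.0 - f) * control_noise
    return control_stream, prior_noise
```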
## Key Features
* **One-Step Super-Resolution**: Built on the 20B-parameter Qwen-Image backbone, ODTSR trains a one-step SR model using LoRA.
* **Controllability**: With our proposed Noise-hybrid Visual Stream and Fidelity-aware Adversarial Training, the SR process can be jointly controlled by prompts as well as a Fidelity Weight $f$.
* **Multilingual Support**: English and Chinese prompts are supported.
* **Versatile Performance**: The model demonstrates strong performance in text images, fine-grained textures, and face images.
## Visual Results
### Results with fixed prompts & high fidelity
<div align="center">
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/4.jpeg" alt="Results with fixed prompts & high fidelity" width="80%">
</div>
Under the high-fidelity setting with a fixed prompt, our model produces restorations that adhere closely to the LQ input while remaining natural, noticeably reducing the over-processed, AI-generated look.
### Text Real-ISR Results
<div align="center">
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/2.jpeg" alt="Text Real-ISR Results" width="80%">
</div>
In text scenarios, when the prompt specifies the text to be restored, the model automatically matches it against the text in the LQ input and restores it accordingly.
### Controllable Real-ISR Results
<div align="center">
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/3.jpeg" alt="Controllable Real-ISR Results" width="80%">
</div>
Qualitative results of controllable SR with prompts and an adjustable Fidelity Weight (denoted as $f$) on the DIV2K-val dataset. As $f$ decreases from 1 to 0, detail generation and prompt adherence gradually strengthen.
## Dependencies and Installation
1. Create and activate a conda env:
```bash
conda create -n yourenv python=3.11
conda activate yourenv
```
2. Install `pytorch` (we recommend `torch==2.6.0`):
```bash
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 -f https://mirrors.aliyun.com/pytorch-wheels/cu124/
```
3. Install this repo (based on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio/tree/main)). The required packages will be automatically installed:
```bash
cd xxxx/ODTSR # Replace xxxx with your path
pip3 install -e . -v -i https://mirrors.cloud.tencent.com/pypi/simple
```
4. (For training) Install `basicsr`:
```bash
pip install basicsr
```
Note:
You can apply the following command to fix a known import bug in `basicsr` (the `functional_tensor` module was renamed in recent torchvision releases). Make sure to replace `/opt/conda` with the path to your own conda environment:
```bash
sed -i '8s/from torchvision.transforms.functional_tensor import rgb_to_grayscale/from torchvision.transforms._functional_tensor import rgb_to_grayscale/' /opt/conda/lib/python3.11/site-packages/basicsr/data/degradations.py
```
5. Download the base model to your disk: [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image/tree/main)
6. (For training) Download the base model to your disk: [Wan2.1-T2V-1.3B](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/tree/main)
7. (For inference) Download the trained ODTSR model weights: [huggingface](https://huggingface.co/double8fun/ODTSR/tree/main) (a download sketch follows this list)
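All three can also be fetched programmatically with `huggingface_hub` (a convenience sketch; assumes `pip install huggingface_hub`, and the `local_dir` paths are placeholders to adjust to your disk layout):
```python
from huggingface_hub import snapshot_download

# Base model (needed for both training and inference).
snapshot_download(repo_id="Qwen/Qwen-Image", local_dir="models/Qwen-Image")
# Training only.
snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B", local_dir="models/Wan2.1-T2V-1.3B")
# Trained ODTSR weights for inference.
snapshot_download(repo_id="double8fun/ODTSR", local_dir="models/ODTSR")
```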
## Inference with Script
Note: inference requires at least 40GB of GPU memory. We will support CPU offload to reduce GPU memory usage soon.
We now support tile-based processing (tile size: 512×512), enabling inputs of arbitrary resolution and SR at any scale factor.
Please replace `experiments/qwen_one_step_gan/${EXP_DATE}/checkpoints/net_gen_iter_10001.pth` with the path to the trained ODTSR model weights.
```bash
sh examples/qwen_image/test_gan.sh
```
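The script's pipeline handles the tiling internally; for intuition only, the general tile-and-stitch pattern (a hypothetical helper, not the repo's API) looks like this:
```python
import torch

def sr_by_tiles(lq: torch.Tensor, sr_fn, tile: int = 512, overlap: int = 32, scale: int = 4):
    """Schematic tile-and-stitch SR: split a (1, C, H, W) LQ image into
    overlapping tiles, super-resolve each with sr_fn, and average overlaps."""
    _, c, h, w = lq.shape
    out = torch.zeros(1, c, h * scale, w * scale)
    weight = torch.zeros_like(out)
    step = tile - overlap
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            bottom, right = min(top + tile, h), min(left + tile, w)
            sr_tile = sr_fn(lq[:, :, top:bottom, left:right])  # (1, C, th*scale, tw*scale)
            ys, xs = top * scale, left * scale
            out[:, :, ys:ys + sr_tile.shape[2], xs:xs + sr_tile.shape[3]] += sr_tile
            weight[:, :, ys:ys + sr_tile.shape[2], xs:xs + sr_tile.shape[3]] += 1.0
    return out / weight.clamp(min=1.0)
```
Real implementations usually feather the overlap region rather than averaging uniformly, to hide seams; the blend above is just the simplest variant.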
<div align="center">
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/infer.png" alt="Inference Workflow" width="70%">
</div>
## Inference with Gradio
```bash
sh examples/qwen_image/test_gradio.sh
```
<img src="https://github.com/RedMediaTech/ODTSR/raw/main/static/gradio.jpeg" alt="Gradio Demo" >
## License
This project is released under the [Apache 2.0 license](https://github.com/RedMediaTech/ODTSR/blob/main/LICENSE).
## Acknowledgement
This project is based on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio/tree/main).
We also adapted part of [PiSA-SR](https://github.com/csslc/PiSA-SR/tree/main)'s dataloader code.
Thanks for the awesome work!
## Citation
If ODTSR is helpful to you, please consider citing our paper:
```bibtex
@article{fang2025onestep,
  title={One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution},
  author={Fang, Yushun and Chen, Yuxiang and Yin, Shibo and Hu, Qiang and Yao, Jiangchao and Zhang, Ya and Zhang, Xiaoyun and Wang, Yanfeng},
  journal={arXiv preprint arXiv:2511.17138},
  year={2025}
}
```