---
extra_gated_fields:
  Name: text
  Institute: text
  Institutional Email: text
  I agree to use this model for non-commercial use ONLY: checkbox
---

# Model Card for NVG series

**Next Visual Granularity Generation**

Yikai Wang, Zhouxia Wang, Zhonghua Wu, Qingyi Tao, Kang Liao, Chen Change Loy.
S-Lab, Nanyang Technological University; SenseTime Research

[arXiv](https://arxiv.org/abs/2508.12811)

## Model Details

### Model Description

We propose a novel approach to image generation that decomposes an image into a structured sequence in which every element shares the same spatial resolution but differs in the number of unique tokens used, capturing a different level of visual granularity.
Image generation is carried out through our Next Visual Granularity (NVG) generation framework, which generates this granularity sequence starting from an empty image and progressively refines it, from global layout to fine details, in a structured manner. The iterative process encodes a hierarchical, layered representation that offers fine-grained control over generation across multiple granularity levels.
We train a series of NVG models for class-conditional image generation on ImageNet and observe clear scaling behavior. NVG consistently outperforms the corresponding VAR series in FID score (3.30 → 3.03, 2.57 → 2.44, 2.09 → 2.06). We also conduct extensive analysis to showcase the capability and potential of the NVG framework. Our code and models will be released.
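The core decomposition above can be sketched in a few lines. The snippet below is a hypothetical toy illustration, not the released NVG code: `granularity_sequence` is an assumed helper name, and simple uniform quantization stands in for the model's learned tokenizer. It only demonstrates the structural property that every element of the sequence keeps the full spatial resolution while the number of unique token values grows per level.

```python
import numpy as np

def granularity_sequence(image, levels=(1, 4, 16, 64)):
    """Quantize `image` to at most k unique values per level.

    Every returned array has the same shape as `image`; only the
    number of distinct values (the "visual granularity") changes.
    """
    seq = []
    lo, hi = image.min(), image.max()
    for k in levels:
        if k == 1:
            # Coarsest level: a single token value everywhere.
            q = np.full_like(image, (lo + hi) / 2.0)
        else:
            # Map each pixel to one of k evenly spaced bin centers.
            bins = np.linspace(lo, hi, k)
            idx = np.round((image - lo) / (hi - lo) * (k - 1))
            q = bins[np.clip(idx, 0, k - 1).astype(int)]
        seq.append(q)
    return seq

rng = np.random.default_rng(0)
img = rng.random((8, 8))
seq = granularity_sequence(img)
for q in seq:
    assert q.shape == img.shape  # spatial resolution never changes
print([len(np.unique(q)) for q in seq])  # unique-token count per level
```

In the actual framework the refinement is generative (each level is predicted from the previous one, starting from an empty image) rather than derived from a ground-truth image as in this sketch.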

- **License:** S-Lab License 1.0

## Model Sources

- **Paper:** arXiv:2508.12811

## Uses

Usage is illustrated in the GitHub repo.

## Citation

**BibTeX:**

```bibtex
@article{wang2025next,
  title={Next Visual Granularity Generation},
  author={Wang, Yikai and Wang, Zhouxia and Wu, Zhonghua and Tao, Qingyi and Liao, Kang and Loy, Chen Change},
  journal={arXiv preprint arXiv:2508.12811},
  year={2025}
}
```