
FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching