
such as depth or normals may require a lightweight prior to
produce an initial image representation.
Future work. ϕ-PD is orthogonal to existing conditioning and adapter methods and can be integrated with them for enhanced control. Future work includes extending ϕ-PD to tasks such as deblurring, relighting, super-resolution, and general image restoration.
References
[1] Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum,
Tommi Jaakkola, and Pulkit Agrawal. Is conditional gen-
erative modeling all you need for decision-making? arXiv
preprint arXiv:2211.15657, 2022. 2
[2] Seungyeon Baek, Erqun Dong, Shadan Namazifard, Mark J
Matthews, and Kwang Moo Yi. Sonic: Spectral optimiza-
tion of noise for inpainting with consistency. arXiv preprint
arXiv:2511.19985, 2025. 2
[3] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Ji-
aming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala,
Timo Aila, Samuli Laine, et al. eDiff-I: Text-to-image dif-
fusion models with an ensemble of expert denoisers. arXiv
preprint arXiv:2211.01324, 2022. 2
[4] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang,
Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu,
Qifeng Chen, Xintao Wang, et al. VideoCrafter1: Open
diffusion models for high-quality video generation. arXiv
preprint arXiv:2310.19512, 2023. 2
[5] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric
Cousineau, Benjamin Burchfiel, and Shuran Song. Diffu-
sion policy: Visuomotor policy learning via action diffusion.
arXiv preprint arXiv:2303.04137, 2023. 2
[6] Guillaume Couairon, Marlène Careil, Matthieu Cord,
Stéphane Lathuilière, and Jakob Verbeek. Zero-shot spatial
layout conditioning for text-to-image diffusion models. In
Proceedings of the IEEE/CVF International Conference on
Computer Vision (ICCV), pages 2174–2183, October 2023.
3
[7] Sander Dieleman. Diffusion is spectral autoregression.
https://sander.ai/2024/09/02/spectral-autoregression.html,
September 2024. Accessed: 7 Dec 2025. 2
[8] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio
Lopez, and Vladlen Koltun. CARLA: An open urban driving
simulator, 2017. 6
[9] Xiang Gao, Shuai Yang, and Jiaying Liu. PTDiffusion: Free
lunch for generating optical illusion hidden pictures with
phase-transferred diffusion model. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 18240–18249, 2025. 2
[10] Joseph W. Goodman. Statistical Optics. Wiley, 2nd edition,
2015. Section 2.9.3. 4
[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffu-
sion probabilistic models. Advances in Neural Information
Processing Systems, 33:6840–6851, 2020. 2,5
[12] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William
Chan, Mohammad Norouzi, and David J Fleet. Video dif-
fusion models. Advances in Neural Information Processing
Systems, 35:8633–8646, 2022. 2
[13] Huang et al. NanoControl: A lightweight framework for
precise and efficient control in diffusion transformer. arXiv
preprint arXiv:2508.10424, 2025. 3
[14] Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey
Levine. Planning with diffusion for flexible behavior synthe-
sis. arXiv preprint arXiv:2205.09991, 2022. 2
[15] Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, and
Jingfeng Zhang. SCEdit: Efficient and controllable image
diffusion generation via skip connection editing. In Proceed-
ings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), pages 8995–9004, June 2024.
3
[16] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine.
Elucidating the design space of diffusion-based generative
models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave,
and Kyunghyun Cho, editors, Advances in Neural Informa-
tion Processing Systems, 2022. 2
[17] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and
Bryan Catanzaro. DiffWave: A versatile diffusion model for
audio synthesis. arXiv preprint arXiv:2009.09761, 2020. 2
[18] Black Forest Labs. Flux. https://github.com/
black-forest-labs/flux, 2024. 11
[19] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow
straight and fast: Learning to generate and transfer data with
rectified flow. arXiv preprint arXiv:2209.03003, 2022. 4
[20] Sicheng Mo, Fangzhou Mu, Kuan Heng Lin, Yanli Liu,
Bochen Guan, Yin Li, and Bolei Zhou. FreeControl:
Training-free spatial control of any text-to-image diffusion
model with any condition. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition
(CVPR), pages 7465–7475, June 2024. 3
[21] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian
Zhang, Zhongang Qi, and Ying Shan. T2I-Adapter: Learning
adapters to dig out more controllable ability for text-to-image
diffusion models. In AAAI, volume 38, pages 4296–4304,
2024. 2
[22] NVIDIA. Cosmos-Transfer1: Conditional world generation
with adaptive multimodal control. arXiv preprint
arXiv:2503.14492, 2025. 3,8
[23] Alan V. Oppenheim and Jae S. Lim. The importance of phase
in signals. Proceedings of the IEEE, 69(5):529–541, 1981.
2,3
[24] Bohao Peng, Jian Wang, Yuechen Zhang, Wenbo Li, Ming-
Chang Yang, and Jiaya Jia. ControlNeXt: Powerful and effi-
cient control for image and video generation. arXiv preprint
arXiv:2408.06070, 2024. 3
[25] Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima
Sadekova, and Mikhail Kudinov. Grad-TTS: A diffusion
probabilistic model for text-to-speech. In International Con-
ference on Machine Learning, pages 8599–8608. PMLR,
2021. 2
[26] Michael Psenka, Alejandro Escontrela, Pieter Abbeel, and
Yi Ma. Learning a diffusion model policy from rewards via
q-score matching. arXiv preprint arXiv:2312.11752, 2023.
2
[27] Yurui Qian, Qi Cai, Yingwei Pan, Yehao Li, Ting Yao, Qibin
Sun, and Tao Mei. Boosting diffusion models with moving