---
pipeline_tag: other
library_name: diffusers
---

# SounDiT: Geo-Contextual Soundscape-to-Landscape Generation

SounDiT is a diffusion transformer (DiT)-based model designed for the **Geo-contextual Soundscape-to-Landscape (GeoS2L)** generation task. It synthesizes geographically realistic landscape images from environmental soundscapes by incorporating geo-contextual scene conditioning.

- **Paper:** [SounDiT: Geo-Contextual Soundscape-to-Landscape Generation](https://huggingface.co/papers/2505.12734)
- **Project Page:** [https://gisense.github.io/SounDiT-Page/](https://gisense.github.io/SounDiT-Page/)
- **Repository:** [https://github.com/GISense/SounDiT](https://github.com/GISense/SounDiT)

## Overview

Recent audio-to-image models often struggle to reconstruct real-world landscapes from environmental soundscapes. SounDiT addresses this gap with a DiT architecture that leverages diverse environmental soundscapes and scene conditioning to ensure geographical coherence. To evaluate this task, the authors introduce the Place Similarity Score (PSS) framework, which captures generation consistency at three levels: element, scene, and human perception.

## Code Usage

### Environment Setup

```bash
conda env create -f environment.yml
conda activate SounDiT
```

### Inference

```bash
bash ./scripts/inference.sh
```

## Citation

If you use SounDiT in your research, please cite the following paper:

```bibtex
@misc{wang2025sounditgeocontextualsoundscapetolandscapegeneration,
  title={SounDiT: Geo-Contextual Soundscape-to-Landscape Generation},
  author={Junbo Wang and Haofeng Tan and Bowen Liao and Albert Jiang and Teng Fei and Qixing Huang and Zhengzhong Tu and Shan Ye and Yuhao Kang},
  year={2025},
  eprint={2505.12734},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2505.12734}
}
```
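The Place Similarity Score framework mentioned in the Overview evaluates consistency at three levels (element, scene, and human perception). As a rough, hypothetical illustration only — the function name, equal weights, and weighted-mean combination below are placeholder assumptions, not the metric defined in the paper — a multi-level score could be aggregated like this:

```python
# Hypothetical sketch of combining three per-level similarity scores into
# one summary value. This is NOT the authors' PSS implementation; the
# weights and the weighted-mean aggregation are illustrative assumptions.

def place_similarity_score(element_sim, scene_sim, perception_sim,
                           weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine per-level similarities (each in [0, 1]) into one score."""
    levels = (element_sim, scene_sim, perception_sim)
    if not all(0.0 <= s <= 1.0 for s in levels):
        raise ValueError("each similarity must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, levels))

score = place_similarity_score(0.8, 0.6, 0.7)
print(round(score, 3))  # 0.7
```

For the actual metric definition and the similarity functions used at each level, see the paper linked above.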