---
task_categories:
- other
tags:
- audio-to-image
- landscape-generation
- geo-contextual
---
# SounDiT: Geo-Contextual Soundscape-to-Landscape Generation
Project Page | Paper | GitHub
This repository contains the datasets for SounDiT, a model designed for Geo-contextual Soundscape-to-Landscape (GeoS2L) generation. The task focuses on synthesizing geographically realistic landscape images from environmental soundscapes.
## Dataset Description
To support the GeoS2L task, the authors constructed two large-scale geo-contextual multi-modal datasets:
- SoundingSVI: Pairs diverse environmental soundscapes with real-world landscape images (Street View Imagery).
- SonicUrban: Pairs urban environmental soundscapes with their corresponding landscape imagery.
Together, these datasets pair environmental audio with geo-contextual visual scenes, enabling the training of models that learn the relationship between soundscapes and landscapes.
## Citation
If you use these datasets in your research, please cite the following paper:
```bibtex
@misc{wang2025sounditgeocontextualsoundscapetolandscapegeneration,
      title={SounDiT: Geo-Contextual Soundscape-to-Landscape Generation},
      author={Junbo Wang and Haofeng Tan and Bowen Liao and Albert Jiang and Teng Fei and Qixing Huang and Zhengzhong Tu and Shan Ye and Yuhao Kang},
      year={2025},
      eprint={2505.12734},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.12734}
}
```