<div align="center">
<h2>AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis</h2>

_**[Susan Liang](https://liangsusan-git.github.io/), [Chao Huang](https://wikichao.github.io/), [Yapeng Tian](https://www.yapengtian.com/), [Anurag Kumar](https://anuragkr90.github.io/), [Chenliang Xu](https://www.cs.rochester.edu/~cxu22/)**_

</div>
### RWAVS Dataset
We provide the Real-World Audio-Visual Scene (RWAVS) dataset.
1. Download the dataset from this Hugging Face repository.
2. Decompress `RWAVS_Release.zip`:
```shell
unzip RWAVS_Release.zip
cd release/
```
3. The data is organized with the following directory structure.
```
./release/
├── 1
│   ├── binaural_syn_re.wav
│   ├── feats_train.pkl
│   ├── feats_val.pkl
│   ├── frames
│   │   ├── 00001.png
│   │   ├── ...
│   │   ├── 00616.png
│   ├── source_syn_re.wav
│   ├── transforms_scale_train.json
│   ├── transforms_scale_val.json
│   ├── transforms_train.json
│   └── transforms_val.json
├── ...
├── 13
└── position.json
```
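After decompression, each scene directory should contain the files shown in the tree above. A minimal sanity-check sketch (the `missing_files` and `check_scene` helpers are hypothetical, written for illustration; file names are taken from the tree):

```python
from pathlib import Path

# Per-scene files listed in the directory tree above
EXPECTED = {
    "binaural_syn_re.wav", "source_syn_re.wav",
    "feats_train.pkl", "feats_val.pkl",
    "transforms_train.json", "transforms_val.json",
    "transforms_scale_train.json", "transforms_scale_val.json",
}

def missing_files(file_names):
    """Return the expected files absent from a scene's file listing."""
    return sorted(EXPECTED - set(file_names))

def check_scene(scene_dir):
    """Check one scene directory, e.g. check_scene('release/1')."""
    return missing_files(p.name for p in Path(scene_dir).iterdir())
```

An empty return value from `check_scene` means the scene unpacked completely.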
The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide:
* `transforms_train.json`: camera poses for training.
* `transforms_val.json`: camera poses for evaluation. We split the data into `train` and `val` subsets, with 80% of the data for training and the rest for evaluation.
* `transforms_scale_train.json`: normalized camera poses for training. We scale 3D coordinates to $[-1, 1]^3$.
* `transforms_scale_val.json`: normalized camera poses for evaluation.
* `frames`: corresponding video frames for each camera pose.
* `source_syn_re.wav`: single-channel audio emitted by the sound source.
* `binaural_syn_re.wav`: two-channel audio captured by the binaural microphone. We synchronize `source_syn_re.wav` and `binaural_syn_re.wav` and resample both to $22050$ Hz.
* `feats_train.pkl`: vision and depth features extracted at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose, then use a pre-trained encoder to extract features from the rendered images.
* `feats_val.pkl`: vision and depth features extracted at each camera pose for inference.
* `position.json`: normalized 3D coordinates of the sound source.
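The `transforms_scale_*.json` files store poses whose positions are mapped into $[-1, 1]^3$. One plausible per-axis min-max scaling is sketched below; this is an illustration of the idea, not necessarily the exact scheme used to produce the released files:

```python
import numpy as np

def normalize_positions(positions):
    """Min-max scale an Nx3 array of positions so each axis spans [-1, 1].

    Assumes each axis has nonzero extent; degenerate axes would divide by zero.
    """
    positions = np.asarray(positions, dtype=np.float64)
    lo = positions.min(axis=0)
    hi = positions.max(axis=0)
    return 2.0 * (positions - lo) / (hi - lo) - 1.0
```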
Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate camera parameters for them.
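Because of this, a data loader should index frames through the transforms files rather than by globbing `frames/`. A sketch, assuming each entry of the `frames` array in a transforms JSON carries a NeRF-style `file_path` field (a key name we assume here for illustration, not confirmed above):

```python
import os

def frames_with_poses(frame_names, transforms):
    """Keep only the frames that have a camera pose in the parsed transforms JSON."""
    posed = {os.path.basename(f["file_path"]) for f in transforms["frames"]}
    return [name for name in frame_names if name in posed]
```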
### Citation
```bib
@inproceedings{liang23avnerf,
  author    = {Liang, Susan and Huang, Chao and Tian, Yapeng and Kumar, Anurag and Xu, Chenliang},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  title     = {AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis},
  year      = {2023}
}
```
### Contact
If you have any comments or questions, feel free to contact [Susan Liang](mailto:sliang22@ur.rochester.edu) and [Chao Huang](mailto:chuang65@ur.rochester.edu).