---
license: cc-by-4.0
---

<div align="center">

<h2>AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis</h2>

_**[Susan Liang](https://liangsusan-git.github.io/), [Chao Huang](https://wikichao.github.io/), [Yapeng Tian](https://www.yapengtian.com/), [Anurag Kumar](https://anuragkr90.github.io/), [Chenliang Xu](https://www.cs.rochester.edu/~cxu22/)**_

</div>

### RWAVS Dataset
We provide the Real-World Audio-Visual Scene (RWAVS) dataset.

1. The dataset can be downloaded from this Hugging Face repository.

2. The data is organized with the following directory structure:
```
./release/
├── 1
│   ├── binaural_syn_re.wav
│   ├── feats_train.pkl
│   ├── feats_val.pkl
│   ├── frames
│   │   ├── 00001.png
│   │   ├── ...
│   │   └── 00616.png
│   ├── source_syn_re.wav
│   ├── transforms_scale_train.json
│   ├── transforms_scale_val.json
│   ├── transforms_train.json
│   └── transforms_val.json
├── ...
├── 13
└── position.json
```

The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide
* `transforms_train.json`: camera poses for training.
* `transforms_val.json`: camera poses for evaluation. We split the data into `train` and `val` subsets, with 80% of the data for training and the rest for evaluation.
* `transforms_scale_train.json`: normalized camera poses for training. We scale 3D coordinates to $[-1, 1]^3$.
* `transforms_scale_val.json`: normalized camera poses for evaluation.
* `frames`: corresponding video frames for each camera pose.
* `source_syn_re.wav`: single-channel audio emitted by the sound source.
* `binaural_syn_re.wav`: two-channel audio captured by the binaural microphone. We synchronize `source_syn_re.wav` and `binaural_syn_re.wav` and resample both to $22050$ Hz.
* `feats_train.pkl`: vision and depth features extracted at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose, and then use a pre-trained encoder to extract features from the rendered images.
* `feats_val.pkl`: vision and depth features extracted at each camera pose for inference.
* `position.json`: normalized 3D coordinates of the sound source.
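
As a minimal sketch, the main per-scene files can be opened with the Python standard library. The internal schemas of the JSON and pickle files are not documented here, so the function below only loads them; the only properties it checks are the audio channel count and sample rate stated above.

```python
import json
import pickle
import wave
from pathlib import Path

def load_scene(scene_dir):
    """Load the main files of one RWAVS scene (a sketch; schemas may differ)."""
    scene_dir = Path(scene_dir)

    # Camera poses for training; swap in transforms_scale_train.json
    # for the variant normalized to [-1, 1]^3.
    with open(scene_dir / "transforms_train.json") as f:
        poses = json.load(f)

    # Pre-extracted vision and depth features (a pickled Python object).
    with open(scene_dir / "feats_train.pkl", "rb") as f:
        feats = pickle.load(f)

    # Two-channel binaural recording, resampled to 22050 Hz.
    with wave.open(str(scene_dir / "binaural_syn_re.wav")) as w:
        assert w.getframerate() == 22050 and w.getnchannels() == 2
        audio = w.readframes(w.getnframes())

    return poses, feats, audio
```

The same pattern applies to the `*_val` files and to `source_syn_re.wav`.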

Please note that some frames may not have corresponding camera poses, because COLMAP fails to estimate the camera parameters for those frames.
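
If you need only the frames that COLMAP registered, one way is to intersect the `frames` directory with the pose entries. This is a hypothetical sketch: it assumes the transforms JSON stores a `frames` list whose entries carry a `file_path` field, which may not match the actual schema.

```python
import json
from pathlib import Path

def frames_with_poses(scene_dir):
    """Return frame image paths that have an estimated camera pose.

    Assumes (hypothetically) that transforms_train.json holds a "frames"
    list whose entries have a "file_path" field; the real schema may differ.
    """
    scene_dir = Path(scene_dir)
    with open(scene_dir / "transforms_train.json") as f:
        transforms = json.load(f)
    # Frame file names that COLMAP successfully registered.
    posed = {Path(e["file_path"]).name for e in transforms.get("frames", [])}
    return sorted(p for p in (scene_dir / "frames").glob("*.png")
                  if p.name in posed)
```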

### Citation
```bib
@inproceedings{liang23avnerf,
  author    = {Liang, Susan and Huang, Chao and Tian, Yapeng and Kumar, Anurag and Xu, Chenliang},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  title     = {AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis},
  year      = {2023}
}
```

### Contact
If you have any comments or questions, feel free to contact [Susan Liang](mailto:sliang22@ur.rochester.edu) and [Chao Huang](mailto:chuang65@ur.rochester.edu).