# SelfCap Dataset

Long multi-view videos collected for the SIGGRAPH Asia 2024 (TOG) paper: [Representing Long Volumetric Video with Temporal Gaussian Hierarchy](https://zju3dv.github.io/longvolcap/).

## Content

Camera parameter conventions follow [EasyVolcap](https://github.com/zju3dv/EasyVolcap). Some sequences contain an extra synchronization correction list (`time computed from frame index - sync.json = actual timestamp`).

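The correction above can be applied per camera. A minimal sketch, assuming `sync.json` maps camera names to per-camera offsets in seconds (the actual file layout may differ):

```python
import json

FPS = 60  # all SelfCap sequences are recorded at 60 FPS


def actual_timestamp(frame_index: int, camera: str, sync_path: str) -> float:
    """actual timestamp = frame_index / FPS - per-camera sync offset."""
    with open(sync_path) as f:
        sync = json.load(f)  # assumed layout: {"0007": 0.013, ...}
    return frame_index / FPS - sync[camera]
```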
We also provide point clouds extracted from the multi-view images using tools such as [COLMAP](https://colmap.github.io) and [RealityCapture](https://www.capturingreality.com/); these were used as initialization when training the Temporal Gaussian Hierarchy model for the paper.

Note that the released dataset is compressed into videos to save bandwidth and storage.
You can extract the images with tools like ffmpeg, following scripts such as [this one](https://github.com/zju3dv/EasyVolcap/blob/main/scripts/preprocess/extract_videos.py).

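As a sketch, one way to drive ffmpeg for a single camera video (the flags here are illustrative, not necessarily the exact ones used by the EasyVolcap script):

```python
from pathlib import Path


def extract_frames_cmd(video: str, out_dir: str) -> list:
    """Build an ffmpeg command that dumps every frame of one camera
    video as zero-padded JPEGs (illustrative flags; see the linked
    EasyVolcap script for the reference preprocessing)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return [
        "ffmpeg", "-i", video,
        "-start_number", "0",   # name frames starting from 000000.jpg
        "-qscale:v", "2",       # high-quality JPEG output
        str(out / "%06d.jpg"),
    ]
```

Run the returned command with `subprocess.run` for each per-camera `XXXX.mp4` in a sequence.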
If you encounter any problems using the dataset, feel free to contact [Zhen Xu](https://zhenx.me).

- `bar`:
  - 3540 frames at 60 FPS (~1 min)
  - 2160p
  - 18 cameras
  - dense point clouds (every 1000 frames) and sparse point clouds (every frame)
  - no `sync.json` provided
- `corgi`:
  - 3500 frames at 60 FPS (~1 min)
  - 2160p
  - 24 cameras
  - dense point clouds (every 1000 frames) and sparse point clouds (every frame)
  - extra synchronization correction provided in `optimized/sync.json`
- `bike`:
  - 37377 frames at 60 FPS (~10 min)
  - 1024x1024
  - 22 cameras
  - same as `corgi` but with denser sparse point clouds
- `hair`:
  - 6500 frames at 60 FPS (~2 min)
  - 2160p
  - 24 cameras
  - same as `corgi` but with denser sparse point clouds
- `dance`:
  - 8200 frames at 60 FPS (~2.5 min)
  - 2160p
  - 24 cameras
  - same as `corgi` but with denser sparse point clouds
- `yoga`:
  - 10300 frames at 60 FPS (~3 min)
  - 2160p
  - 24 cameras
  - same as `corgi` but with denser sparse point clouds

For the LongVolcap paper we only performed qualitative analysis and aimed for the best quality possible (mainly for the real-time rendering demo), so no extra testing views are held out. We trained on 0.5x downsampled images to speed up the process and used the videos at their full speed (60 FPS) without temporal subsampling. For `bike`, we used frames 15000-21000 for the 6000-frame model and frames 15000-33000 for the 18000-frame model. For `dance`, `hair`, and `yoga`, we used frames 6000-12000. For `corgi`, we used frames 5000-12000. The `bar` model uses all existing frames.

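The frame selections above can be written down and sanity-checked as ranges. Interpreting them as end-exclusive makes the 6000- and 18000-frame `bike` models cover exactly 6000 and 18000 frames (this interpretation, and the dictionary layout, are our own):

```python
# Training frame ranges for the LongVolcap models (assumed end-exclusive).
TRAIN_RANGES = {
    "bike-6000":  range(15000, 21000),
    "bike-18000": range(15000, 33000),
    "dance":      range(6000, 12000),
    "hair":       range(6000, 12000),
    "yoga":       range(6000, 12000),
    "corgi":      range(5000, 12000),
    "bar":        range(0, 3540),  # all existing frames
}

for name, frames in TRAIN_RANGES.items():
    print(name, len(frames))  # number of training frames per model
```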
For the FreeTimeGS paper, we summarize the quantitative evaluation protocol in the table below. For scenes with a downsample ratio of 0.5, we first perform COLMAP undistortion with `blank_pixels=0` and then downsample by a factor of 0.5 using `INTER_AREA`. For scenes with a downsample ratio of 1.0, we perform COLMAP undistortion with `blank_pixels=0` without downsampling.

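At an exact 0.5 ratio, `INTER_AREA` reduces to averaging each 2x2 pixel block. In practice one would call `cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)`; a dependency-free sketch of the special case, for clarity:

```python
def downsample_half(img):
    """Average 2x2 blocks of a single-channel image given as a list of
    rows; equivalent to OpenCV's INTER_AREA at an exact 0.5 ratio."""
    h, w = len(img), len(img[0])
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    return [
        [
            (img[2 * y][2 * x] + img[2 * y][2 * x + 1]
             + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]
```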
| FreeTimeGS Scene | SelfCap Scene    | Test View | Training Views        | Frame Indices | Downsample Ratio |
| ---------------- | ---------------- | --------- | --------------------- | ------------- | ---------------- |
| dance1           | hair-release     | 0015.mp4  | the rest of the views | [4120,4180)   | 0.5              |
| dance2           | hair-release     | 0015.mp4  | the rest of the views | [5530,5590)   | 0.5              |
| corgi1           | corgi-release    | 0007.mp4  | the rest of the views | [200,260)     | 0.5              |
| corgi2           | corgi-release    | 0007.mp4  | the rest of the views | [2950,3010)   | 0.5              |
| bike1            | bike-release     | 0009.mp4  | the rest of the views | [8900,8960)   | 1.0              |
| bike2            | bike-release     | 0009.mp4  | the rest of the views | [30020,30080) | 1.0              |
| dance3           | not released yet |           |                       |               | 0.5              |
| dance4           | not released yet |           |                       |               | 0.5              |

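The `[start,end)` notation in the table is half-open (end-exclusive), so each released evaluation clip covers exactly 60 frames, i.e. one second at 60 FPS. A small helper assuming that convention (the helper itself is ours, not part of the dataset tooling):

```python
def parse_interval(s: str) -> range:
    """Parse a half-open '[start,end)' frame interval string into a range."""
    start, end = s.strip("[)").split(",")
    return range(int(start), int(end))
```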

## License

The ***SelfCap*** dataset is released under the non-commercial, research-only custom zju3dv license. Please contact [Prof. Xiaowei Zhou](https://xzhou.me) for any commercial usage inquiries.

## Citation

```bibtex
@Article{xu2024longvolcap,
  author  = {Xu, Zhen and Xu, Yinghao and Yu, Zhiyuan and Peng, Sida and Sun, Jiaming and Bao, Hujun and Zhou, Xiaowei},
  title   = {Representing Long Volumetric Video with Temporal Gaussian Hierarchy},
  journal = {ACM Transactions on Graphics},
  volume  = {43},
  number  = {6},
  month   = {November},
  year    = {2024},
  url     = {https://zju3dv.github.io/longvolcap}
}

@Inproceedings{xu2023easyvolcap,
  title     = {EasyVolcap: Accelerating Neural Volumetric Video Research},
  author    = {Xu, Zhen and Xie, Tao and Peng, Sida and Lin, Haotong and Shuai, Qing and Yu, Zhiyuan and He, Guangzhao and Sun, Jiaming and Bao, Hujun and Zhou, Xiaowei},
  booktitle = {SIGGRAPH Asia 2023 Technical Communications},
  year      = {2023}
}

@Inproceedings{xu20234k4d,
  title     = {4K4D: Real-Time 4D View Synthesis at 4K Resolution},
  author    = {Xu, Zhen and Peng, Sida and Lin, Haotong and He, Guangzhao and Sun, Jiaming and Shen, Yujun and Bao, Hujun and Zhou, Xiaowei},
  booktitle = {CVPR},
  year      = {2024}
}

@Inproceedings{wang2025freetimegs,
  author    = {Wang, Yifan and Yang, Peishan and Xu, Zhen and Sun, Jiaming and Zhang, Zhanhua and Chen, Yong and Bao, Hujun and Peng, Sida and Zhou, Xiaowei},
  title     = {FreeTimeGS: Free Gaussian Primitives at Anytime Anywhere for Dynamic Scene Reconstruction},
  booktitle = {CVPR},
  year      = {2025},
  url       = {https://zju3dv.github.io/freetimegs}
}
```