---
pretty_name: GS2E
paperswithcode_id: robust-e-nerf-synthetic-event-dataset
license: cc-by-4.0
viewer: false
size_categories:
- 1K<n<10K
---

# 📦 GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation

> *Submission to NeurIPS 2025 D&B Track, under review.*

<p align="center">
  <img src="./assets/teaser_00.png" alt="Teaser of GS2E" width="80%">
</p>

## 🧾 Dataset Summary

**GS2E** (Gaussian Splatting for Event stream Extraction) is a synthetic multi-view event dataset designed to support high-fidelity 3D scene understanding, novel view synthesis, and event-based neural rendering. Unlike previous video-driven or graphics-only event datasets, GS2E leverages 3D Gaussian Splatting (3DGS) to generate geometry-consistent, photorealistic RGB frames from sparse camera poses, followed by physically informed event simulation with adaptive contrast-threshold modeling. The dataset enables scalable, controllable, and sensor-faithful generation of realistic event streams with aligned RGB and camera-pose data.

<p align="center">
  <img src="./assets/egm_diff.png" alt="Event generation model (EGM) comparison" width="80%">
</p>


## 📚 Dataset Description

Event cameras offer unique advantages, such as low latency, high temporal resolution, and high dynamic range, that make them ideal for 3D reconstruction and SLAM under rapid motion and challenging lighting. However, the lack of large-scale, geometry-consistent event datasets has hindered the development of event-driven and hybrid RGB-event methods.

GS2E addresses this gap by synthesizing event data from sparse, static RGB images. Using 3D Gaussian Splatting (3DGS), we reconstruct high-fidelity 3D scenes and generate dense camera trajectories to render blur-free and motion-blurred sequences. These sequences are then processed by a physically grounded event simulator that incorporates adaptive contrast thresholds varying across scenes and motion profiles.

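The contrast-threshold event model behind this kind of simulation can be illustrated with a minimal sketch: a pixel emits an event whenever its log-intensity changes by more than a threshold `C` since the last event at that pixel. This is an illustrative toy model of the general event-generation principle, not the simulator used to build GS2E; the threshold value and the frame-based interface are assumptions.

```python
import numpy as np

def simulate_events(frames, timestamps, C=0.2, eps=1e-6):
    """Toy contrast-threshold event simulation between consecutive frames.

    frames:     (N, H, W) float intensities in [0, 1]
    timestamps: (N,) frame times in microseconds
    Returns an (M, 4) int array of events [t, x, y, polarity (1/0)],
    polarity 1 for a brightness increase and 0 for a decrease.
    """
    log_ref = np.log(frames[0] + eps)  # per-pixel log-intensity reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame + eps) - log_ref
        # Each full threshold crossing emits one event; big changes emit several.
        n = np.floor(np.abs(diff) / C).astype(int)
        for y, x in zip(*np.nonzero(n)):
            pol = 1 if diff[y, x] > 0 else 0
            events.extend((t, x, y, pol) for _ in range(n[y, x]))
        # Advance the reference only by the whole thresholds actually emitted.
        log_ref += np.sign(diff) * n * C
    return np.array(events, dtype=np.int64) if events else np.empty((0, 4), np.int64)
```

A pixel stepping from 0.1 to 0.9 intensity crosses the 0.2 log-threshold about ten times, so it emits a burst of positive events rather than a single one, which is what gives event streams their high temporal resolution.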

The dataset includes:

* **21 distinct scenes**, each with **3** corresponding event sequences under **varying blur levels** (slight, medium, and severe)
* Per-frame photorealistic RGB renderings (clean and motion-blurred)
* Ground-truth camera poses
* Geometry-consistent synthetic event streams

The result is a simulation-friendly yet physically informed dataset for training and evaluating event-based 3D reconstruction, localization, SLAM, and novel view synthesis.


If you use this synthetic event dataset in your work, please cite:

```bibtex
@misc{li2025gs2egaussiansplattingeffective,
      title={GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation},
      author={Yuchen Li and Chaoran Feng and Zhenyu Tang and Kaiyuan Deng and Wangbo Yu and Yonghong Tian and Li Yuan},
      year={2025},
      eprint={2505.15287},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.15287},
}
```


## Dataset Structure and Contents

This synthetic event dataset is organized by scene, with each scene directory containing synchronized multimodal data for RGB-event processing tasks. The data was derived from MVImgNet and processed via GS2E to generate high-quality event streams. Each scene includes the following elements:

| Path / File            | Data Type                  | Description                                          |
| ---------------------- | -------------------------- | ---------------------------------------------------- |
| `images/`              | RGB image sequence         | Sharp, high-resolution ground-truth RGB frames       |
| `images_blur_<level>/` | Blurred RGB image sequence | Images with different degrees of artificial blur     |
| `sparse/`              | COLMAP sparse model        | Contains `cameras.bin`, `images.bin`, `points3D.bin` |
| `events.h5`            | Event data (HDF5)          | Compressed event stream as (t, x, y, p)              |

- The `events.h5` file stores events in the format:
  `[timestamp (μs), x (px), y (px), polarity (1/0)]`
- `images_blur_<level>/` folders indicate increasing blur intensity.
- `sparse/` is generated by COLMAP and includes camera intrinsics and poses.

This structure enables joint processing of visual and event data for tasks such as event-based deblurring, video reconstruction, and hybrid SfM pipelines.


The synthetic event-stream data derived from DL3DV is stored under the `DL3DV-based` subdirectory. Because the original dataset contains a large number of images at multiple resolutions, storage limitations make it impractical to include the image files in this repository; the provided archives therefore contain only the event-stream data. The corresponding images and sparse source data can be downloaded from the [DL3DV official dataset](https://huggingface.co/datasets/DL3DV/DL3DV-Benchmark). Scene indices in this dataset follow the same ordering as in DL3DV.


<p align="center">
  <img src="./assets/pipeline.png" alt="GS2E pipeline" width="80%">
</p>


## Setup

1. Install [Git LFS](https://git-lfs.com/) according to the [official instructions](https://github.com/git-lfs/git-lfs?utm_source=gitlfs_site&utm_medium=installation_link&utm_campaign=gitlfs#installing).

2. Set up Git LFS for your user account with:

   ```bash
   git lfs install
   ```

3. Clone this dataset repository into the desired destination directory with:

   ```bash
   git lfs clone https://huggingface.co/datasets/Falcary/GS2E
   ```

4. To minimize disk usage, you may remove the `.git/` folder; however, doing so makes it harder to pull future updates from this upstream dataset repository.