---
license: cc-by-nc-4.0
task_categories:
- other
tags:
- 3d
- gaussian-splatting
- quality-assessment
---

# MUGSQA (Multi-Uncertainty-Based Gaussian Splatting Quality Assessment) Dataset

<a href="https://arxiv.org/abs/2511.06830"><img src="https://img.shields.io/badge/Paper-b5212f.svg?logo=arxiv" alt="arXiv" style="vertical-align: middle;"></a>
<a href="https://github.com/Solivition/MUGSQA"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github" alt="github" style="vertical-align: middle;"></a>

Officially released MUGSQA dataset for the <strong>ICASSP 2026</strong> paper <strong>MUGSQA: Novel Multi-Uncertainty-Based Gaussian Splatting Quality Assessment Method, Dataset, and Benchmarks</strong>.

## 📃Dataset Summary

**MUGSQA** is a large-scale dataset for **Gaussian Splatting quality assessment (GSQA)**. It was constructed by introducing multiple uncertainties into the reconstruction process and collecting large-scale subjective quality scores.

The dataset contains **2,414 reconstructed Gaussian models**, each paired with rendered videos and Mean Opinion Scores (MOS). It supports research on:

* Gaussian Splatting quality assessment
* reconstruction robustness evaluation
* rendering-based and rendering-free quality metrics

The dataset simulates several uncertainties that commonly occur during reconstruction, including:

* input view resolution
* number of input views
* view-to-object distance
* initialization of the point cloud
* method of GS reconstruction

These factors produce diverse reconstruction distortions, making the dataset useful for benchmarking both reconstruction methods and quality metrics.

## 📁Dataset Structure

The dataset repository contains the following files:

```
reference/
main.tar.gz
additional.tar.gz
mos_main.xlsx
mos_additional.xlsx
```

### 1. Reference Videos

```
reference/
```

This folder contains **ground-truth reference videos** rendered from the original source objects. Due to copyright restrictions, the original 3D mesh models are not included; only the rendered reference videos are provided. These videos served as reference stimuli in the subjective quality assessment experiments.

### 2. Main Set

```
main.tar.gz
```

After extraction:

```
main/
├── sample_folder_1/
├── sample_folder_2/
...
└── sample_folder_1970/
```

The **main set contains 1,970 reconstructed Gaussian objects**. Each sample folder holds one distorted reconstruction generated under a specific combination of uncertainty settings.

#### 2.1. Naming Convention

Each sample folder follows the format:

```
modelname_resolution_views_distance_method_pointcloud
```

Example:

```
12th-c-ce-water-moon-guanyin_480res_9views_5distance_lgs_rndpc
```

Where:

| Field      | Description               |
| ---------- | ------------------------- |
| modelname  | name of the source object |
| resolution | input view resolution     |
| views      | number of input views     |
| distance   | view-to-object distance   |
| method     | reconstruction method     |
| pointcloud | initial point cloud type  |

These parameters correspond to the reconstruction uncertainty settings used during dataset generation.

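The naming convention above can be split programmatically. The sketch below is illustrative, not part of the dataset tooling, and assumes model names may contain hyphens but no underscores, so the last five underscore-separated tokens are always the uncertainty settings:

```python
def parse_sample_name(folder_name: str) -> dict:
    """Split `modelname_resolution_views_distance_method_pointcloud`.

    Assumes the model name contains no underscores, so the last five
    underscore-separated tokens are the uncertainty settings.
    """
    modelname, resolution, views, distance, method, pointcloud = folder_name.rsplit("_", 5)
    return {
        "modelname": modelname,
        "resolution": resolution,
        "views": views,
        "distance": distance,
        "method": method,
        "pointcloud": pointcloud,
    }

fields = parse_sample_name("12th-c-ce-water-moon-guanyin_480res_9views_5distance_lgs_rndpc")
print(fields["modelname"])  # 12th-c-ce-water-moon-guanyin
print(fields["method"])     # lgs
```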
#### 2.2. Files inside each sample folder

Each sample folder contains two files that share the folder's name:

```
sample_name.mp4
sample_name.ply
```

| File   | Description                                         |
| ------ | --------------------------------------------------- |
| `.mp4` | rendered video of the reconstructed Gaussian object |
| `.ply` | reconstructed 3D Gaussian model                     |

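Since both files share the folder's name, their paths can be derived directly from the folder path. A minimal stdlib sketch (the helper name is our own):

```python
from pathlib import Path

def sample_files(sample_folder: str) -> tuple:
    """Return the (video, gaussians) paths for one sample folder,
    assuming the layout described above: both files are named after
    the folder itself."""
    folder = Path(sample_folder)
    stem = folder.name
    return folder / f"{stem}.mp4", folder / f"{stem}.ply"

video, gaussians = sample_files(
    "main/12th-c-ce-water-moon-guanyin_480res_9views_5distance_lgs_rndpc"
)
print(video.suffix, gaussians.suffix)  # .mp4 .ply
```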
### 3. Additional Set

```
additional.tar.gz
```

After extraction:

```
additional/
├── sample_folder_1/
...
└── sample_folder_444/
```

The **additional set contains 444 reconstructed samples**. Unlike the main set, it includes reconstructions generated with multiple Gaussian Splatting methods:

* 3DGS
* Mip-Splatting
* Scaffold-GS
* EAGLES
* Octree-GS

All other settings are consistent with the main set.

### 4. MOS Annotations

The subjective quality scores are stored in:

```
mos_main.xlsx
mos_additional.xlsx
```

Each entry corresponds to one distorted sample and its **Mean Opinion Score (MOS)**. MOS values capture perceived quality as collected in a large-scale subjective study; they range from 0 to 5, with higher values indicating better perceived quality.

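A common access pattern is an index from sample name to MOS. In practice the rows would be read from the spreadsheets (e.g. with `pandas.read_excel`); since the exact column names are not documented here, this sketch uses clearly labeled placeholder rows instead of real scores:

```python
def build_mos_index(rows) -> dict:
    """rows: iterable of (sample_name, mos) pairs, e.g. read from
    mos_main.xlsx. Returns a name -> MOS lookup table."""
    return {name: float(mos) for name, mos in rows}

# Placeholder rows standing in for the spreadsheet contents; the scores
# below are NOT real MOS values from the dataset.
rows = [
    ("12th-c-ce-water-moon-guanyin_480res_9views_5distance_lgs_rndpc", 3.7),
    ("some-other-object_480res_9views_5distance_lgs_rndpc", 4.2),
]
mos = build_mos_index(rows)
print(mos["12th-c-ce-water-moon-guanyin_480res_9views_5distance_lgs_rndpc"])  # 3.7
```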
## 🧰Usage Example

Example workflow:

1. Extract the dataset archives:

   ```
   tar -xzf main.tar.gz
   tar -xzf additional.tar.gz
   ```

2. Load reconstructed models:

   ```
   sample_folder/
   ├── sample_name.mp4
   └── sample_name.ply
   ```

3. Use the `.ply` files for rendering-free quality assessment, or the `.mp4` files for rendering-based quality assessment.

4. Match each sample name with its MOS score in the Excel files.

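The steps above can be tied together by enumerating the extracted sample folders; a stdlib-only sketch, assuming `main.tar.gz` has been extracted into `./main`:

```python
from pathlib import Path

def enumerate_samples(root: str):
    """Yield (sample_name, video_path, ply_path) for each sample
    folder under `root`, relying on the files sharing the folder name."""
    for folder in sorted(Path(root).iterdir()):
        if folder.is_dir():
            stem = folder.name
            yield stem, folder / f"{stem}.mp4", folder / f"{stem}.ply"

if Path("main").is_dir():  # only after extracting main.tar.gz
    for name, video, gaussians in enumerate_samples("main"):
        print(name, video, gaussians)
```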
## 🔗Citation

```latex
@inproceedings{chen2026mugsqa,
  title={{MUGSQA: Novel Multi-Uncertainty-Based Gaussian Splatting Quality Assessment Method, Dataset, and Benchmarks}},
  booktitle={Proc. ICASSP},
  year={2026}
}
```