Add task category and links to paper, code, and project page
This PR improves the dataset card by:
- Adding the `image-segmentation` task category to the metadata.
- Updating the title references to "TRASE: Tracking-free 4D Segmentation and Editing" (the paper was previously titled "SADG").
- Adding links to the [research paper](https://huggingface.co/papers/2411.19290), [project page](https://yunjinli.github.io/project-sadg/), and [GitHub repository](https://github.com/yunjinli/SADG-SegmentAnyDynamicGaussian).
- Maintaining all existing license and configuration information.
README.md (changed):
---
language:
- en
license: cc-by-nc-4.0
task_categories:
- image-segmentation
tags:
- novel view synthesis
- dynamic scene novel view segmentation
- 3d segmentation
- neural radiance fields
- gaussian splatting
datasets:
- google-immersive
- technicolor-light-field
configs:
- config_name: HyperNeRF-Mask
  data_files:
  - split: test
    path:
    - Mask-Benchmark/HyperNeRF-Mask/*/gt_masks/*.png
- config_name: NeRF-DS-Mask
  data_files:
  - split: test
    path:
    - Mask-Benchmark/NeRF-DS-Mask/*/gt_masks/*.png
- config_name: Neu3D-Mask
  data_files:
  - split: test
    path:
    - Mask-Benchmark/Neu3D-Mask/*/gt_masks/*.png
- config_name: Immersive-Mask
  data_files:
  - split: test
    path:
    - Mask-Benchmark/Immersive-Mask/*/gt_masks/*.png
- config_name: Technicolor-Mask
  data_files:
  - split: test
    path:
    - Mask-Benchmark/Technicolor-Mask/*/gt_masks/*.png
- config_name: default
  data_files:
  - split: test
    path:
    - Mask-Benchmark/*/*/gt_masks/*.png
---
# Mask-Benchmark Dataset

[**Project Page**](https://yunjinli.github.io/project-sadg/) | [**Paper**](https://huggingface.co/papers/2411.19290) | [**Code**](https://github.com/yunjinli/SADG-SegmentAnyDynamicGaussian)

This repository contains the dynamic scene novel-view segmentation benchmarks used in the paper "**TRASE: Tracking-free 4D Segmentation and Editing**" (also referred to as "**SADG: Segment Any Dynamic Gaussian Without Object Trackers**"). The benchmarks are designed for evaluating segmentation performance in dynamic novel view synthesis across various datasets.

## Overview

The Mask-Benchmark dataset provides ground truth segmentation masks for multiple dynamic scene datasets, including:

- **HyperNeRF** (A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields, ACM Transactions on Graphics (TOG))
- **NeRF-DS** (NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects, CVPR 2023)
- **Neu3D** (Neural 3D Video Synthesis from Multi-view Video, CVPR 2022)
- **Google Immersive** (Immersive Light Field Video with a Layered Mesh Representation, SIGGRAPH 2020)
- **Technicolor Light Field** (Dataset and Pipeline for Multi-View Light-Field Video, CVPRW 2017)

These benchmarks allow for quantitative evaluation of segmentation accuracy (mIoU and mAcc) in novel view synthesis for dynamic scenes, which was previously lacking in the field.
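To make the reported metrics concrete, here is a minimal sketch of IoU and pixel accuracy for a single predicted/ground-truth binary mask pair. This is an illustration only, not the benchmark's official evaluation script (which lives in the linked code repository); mIoU and mAcc average these quantities over all evaluated masks.

```python
# Illustrative IoU / pixel-accuracy computation for one binary mask pair.
import numpy as np

def iou_and_acc(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Return (IoU, pixel accuracy) for a predicted vs. ground-truth mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0  # empty masks count as a perfect match
    acc = (pred == gt).mean()              # fraction of correctly labeled pixels
    return float(iou), float(acc)

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
iou, acc = iou_and_acc(pred, gt)  # IoU = 0.5, accuracy = 0.75
```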
For the full license text, please visit: https://creativecommons.org/licenses/by-nc/4.0/

The Mask-Benchmark incorporates data derived from multiple source datasets, each with its own license terms that must be respected:

### 1. Neural 3D Video Dataset (Neu3D)

Licensed under CC-BY-NC 4.0.

### 2. HyperNeRF Dataset

Licensed under Apache License 2.0.

### 3. NeRF-DS Dataset

Licensed under Apache License 2.0.

### 4. Google Immersive Dataset

Refer to the original license terms provided by the Google Immersive project.

### 5. InterDigital Light-Field Dataset (Technicolor)

**INTERDIGITAL LIGHT-FIELD DATASET RELEASE AGREEMENT**

The goal of the InterDigital Light-Field dataset is to contribute to the development and assessment of new techniques, technology, and algorithms for Light-Field video processing. InterDigital has copyright and all rights of authorship on the dataset and is the principal distributor of the Light-Field dataset.

**RELEASE OF THE DATASET**

To advance the state-of-the-art in Light-Field video processing and editing, the InterDigital Light-Field dataset is made available to the researcher community for scientific research only. All other uses of the InterDigital Light-Field dataset will be considered on a case-by-case basis. To receive a copy of the Light-Field dataset, the requestor must agree to observe all of these Terms of use.

**CONSENT**

The researcher(s) agrees to the following restrictions:

1. **Redistribution**: The dataset shall not be further distributed without prior written approval.
2. **Modification and Non-Commercial Use**: The dataset may not be modified or used for commercial purposes.
3. **Publication Requirements**: In no case should the still frames or videos be used in any way that could directly or indirectly harm InterDigital. InterDigital permits publication (paper or web-based) of the data for scientific purposes only. Any other publication without scientific and academic value is strictly prohibited.
4. **Citation/Reference**: All documents and papers that report on research that uses the InterDigital Light-Field dataset must acknowledge the use of the dataset by including an appropriate citation to the following: *Dataset and Pipeline for Multi-View Light-Field Video*. N. Sabater, G. Boisson, B. Vandame, P. Kerbiriou, F. Babon, M. Hog, T. Langlois, R. Gendrot, O. Bureller, A. Schubert, and V. Allie. CVPR Workshops, 2017.
5. **No Warranty**: THE PROVIDER OF THE DATA MAKES NO REPRESENTATIONS AND EXTENDS NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED. THERE ARE NO EXPRESS OR IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, OR THAT THE USE OF THE MATERIAL WILL NOT INFRINGE ANY PATENT, COPYRIGHT, TRADEMARK, OR OTHER PROPRIETARY RIGHTS.

## Using the Mask-Benchmark Dataset

By using the Mask-Benchmark dataset, you agree to:

1. Comply with the CC-BY-NC 4.0 license governing the overall dataset.
2. Adhere to all component dataset license terms listed above.
3. Properly cite both the Mask-Benchmark and the original source datasets.
4. Use the dataset for scientific and research purposes only.
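As an illustration of how the `configs` front matter partitions the mask files, here is a small sketch of glob-style routing over hypothetical file paths (the scene names are invented for the example, and `fnmatch`'s `*` also crosses `/`, unlike some glob implementations). In practice, loading would go through the `datasets` library with a config name such as `HyperNeRF-Mask`.

```python
# Sketch: how each config's glob pattern selects ground-truth mask files.
# The file paths below are hypothetical examples of the benchmark layout.
from fnmatch import fnmatch

files = [
    "Mask-Benchmark/HyperNeRF-Mask/espresso/gt_masks/000000.png",
    "Mask-Benchmark/Neu3D-Mask/coffee_martini/gt_masks/000010.png",
    "Mask-Benchmark/Technicolor-Mask/Painter/gt_masks/000005.png",
]

patterns = {
    "HyperNeRF-Mask": "Mask-Benchmark/HyperNeRF-Mask/*/gt_masks/*.png",
    "default": "Mask-Benchmark/*/*/gt_masks/*.png",
}

def select(pattern: str) -> list[str]:
    """Return the files a config's glob pattern would pick up."""
    return [f for f in files if fnmatch(f, pattern)]

# The default config spans every per-dataset subset; a named config
# selects only its own subtree.
default_hits = select(patterns["default"])
hypernerf_hits = select(patterns["HyperNeRF-Mask"])
```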
# BibTeX

```bibtex
@article{li2024sadg,
  title={SADG: Segment Any Dynamic Gaussian Without Object Trackers},
  author={Li, Yun-Jin and Gladkova, Mariia and Xia, Yan and Cremers, Daniel},
  journal={arXiv preprint arXiv:2411.19290},
  year={2024}
}
```