Improve dataset card: Add task categories, tags, paper, code and project page links, and sample usage
This PR enhances the dataset card for SIU3R by adding metadata and external links for improved discoverability and usability:
- **Metadata**: Added `task_categories` (`image-to-3d`, `image-segmentation`, `text-retrieval`) and specific `tags` (`3d-reconstruction`, `semantic-segmentation`, `instance-segmentation`, `panoptic-segmentation`, `referring-segmentation`, `scannet`, `english`) to clearly define the dataset's domain and contents.
- **Paper Link**: Updated the paper link in the introductory sentence to point to the official Hugging Face paper page: https://huggingface.co/papers/2507.02705.
- **External Links**: Added explicit links to the [project page](https://insomniaaac.github.io/siu3r/) and the [GitHub repository](https://github.com/WU-CVGL/SIU3R).
- **Sample Usage**: Included a "Sample Usage" section with code snippets for inference, directly sourced from the project's GitHub README, to guide users on how to interact with the model/dataset.
These updates provide a more complete and user-friendly dataset card for the community.
````diff
@@ -1,7 +1,24 @@
 ---
 license: mit
+task_categories:
+- image-to-3d
+- image-segmentation
+- text-retrieval
+tags:
+- 3d-reconstruction
+- semantic-segmentation
+- instance-segmentation
+- panoptic-segmentation
+- referring-segmentation
+- scannet
+- english
 ---
-
+
+This is the official Hugging Face repository for [SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond Feature Alignment](https://huggingface.co/papers/2507.02705).
+
+Project Page: https://insomniaaac.github.io/siu3r/
+Code: https://github.com/WU-CVGL/SIU3R
+
 # Pretrained Models for SIU3R
 We provide pretrained models for the Panoptic Segmentation task. We train the MASt3R backbone with an adapter on the COCO dataset for SIU3R initialization.
 
@@ -135,7 +152,7 @@ For refer segmentation task, we provide the refer segmentation annotations in tr
 [49406, 589, 533, 320, 1538, 2175, 269, 997, 631, 2097, 2866, 12033, 2403, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 [49406, 589, 533, 320, 1538, 2175, 269, 585, 533, 13589, 638, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 [49406, 997, 533, 320, 3638, 2175, 530, 518, 1530, 269, 585, 791, 2581, 12033, 8525, 705, 531, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
-[49406, 320, 2866, 2175, 267, 9729, 530, 518, 3694, 539, 518, 1530, 267, 525, 518, 1823,
+[49406, 320, 2866, 2175, 267, 9729, 530, 518, 3694, 539, 518, 1530, 267, 525, 518, 1823, 530, 518, 5407, 539, 1093, 269, 518, 1155, 631, 275, 2866, 12033, 269, 518, 2184, 533, 320, 2866, 2489, 593, 1395, 10485, 525, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 [49406, 589, 533, 320, 2866, 2175, 269, 585, 533, 13589, 638, 4135, 320, 1939, 11840, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
 ]
 },
@@ -146,6 +163,20 @@ For refer segmentation task, we provide the refer segmentation annotations in tr
 ```
 The "scene0011_00" field is the scan name, the "2" field is the object id (also the instance label), the "object_name" field is the object name, the "instance_label_id" field is the semantic label id for the instance segmentation task, the "panoptic_label_id" field is the semantic label id for the panoptic segmentation task, and the "frame_id" field lists the frame ids of the images that contain this object. The "text" field is the refer segmentation text description, and the "text_token" field is that text tokenized with OpenCLIP (https://github.com/mlfoundations/open_clip) using the `convnext_large_d_320` model (https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup). The refer segmentation task is to segment the object in the image based on this text description. This part of the data is obtained from the UniSeg3D repository (https://github.com/dk-liang/UniSeg3D); thanks for their great work.
 
+## Sample Usage
+To run inference with the SIU3R model using this dataset, first download the pre-trained model checkpoint and place it in the `pretrained_weights` directory (as described in the [GitHub repository](https://github.com/WU-CVGL/SIU3R)).
+
+Then run the inference script:
+```bash
+python inference.py --image_path1 <path_to_image1> --image_path2 <path_to_image2> --output_path <output_directory> [--cx <cx_value>] [--cy <cy_value>] [--fx <fx_value>] [--fy <fy_value>]
+```
+An `output.ply` file will be generated in the specified output directory, containing the reconstructed Gaussian splats. The `cx`, `cy`, `fx`, and `fy` parameters are optional and specify the camera intrinsics; if not provided, default values are used.
+
+You can view the results in the online viewer by running:
+```bash
+python viewer.py --output_ply <output_directory/output.ply>
+```
+
 # Citation
 If you find our work useful, please consider citing our paper:
 ```bibtex
````
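For reference, the `text_token` arrays shown in the diff can be reproduced with the OpenCLIP tokenizer named in the card. The snippet below is a minimal sketch: `get_tokenizer` is the standard open_clip entry point, but the sample sentence is hypothetical, and the actual strings come from each annotation's "text" field.

```python
import open_clip

# Tokenizer for the `convnext_large_d_320` model referenced in the card.
tokenizer = open_clip.get_tokenizer("convnext_large_d_320")

# Hypothetical refer segmentation description; real ones come from the
# "text" field of the annotation files.
text = "this is a brown chair near the window."
tokens = tokenizer([text])  # LongTensor of shape (1, 77)

# 49406 marks start-of-text, 49407 marks end-of-text, and the remaining
# positions are zero-padded to length 77, matching the arrays in the diff.
print(tokens.shape)   # torch.Size([1, 77])
print(tokens[0, :10])
```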
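The optional `--cx/--cy/--fx/--fy` flags in the added Sample Usage section describe a standard pinhole intrinsics matrix. The sketch below is illustrative only: the actual defaults are defined inside the SIU3R inference script, and the ScanNet-style values shown here are an assumption.

```python
import numpy as np

def make_intrinsics(fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Build the 3x3 pinhole camera intrinsics matrix implied by the CLI flags."""
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])

# ScanNet-style intrinsics for a 640x480 frame (illustrative values, not the
# defaults hard-coded in inference.py).
K = make_intrinsics(fx=577.87, fy=577.87, cx=320.0, cy=240.0)
print(K)
```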
|