---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files: "split.csv"
---
# SpatialLM Dataset
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<picture>
<source srcset="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/_dK14CT3do8rBG3QrHUjN.png" media="(prefers-color-scheme: dark)">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/bAZyeIXOMVASHR6-xVlQU.png" width="60%" alt="SpatialLM"/>
</picture>
</div>
<hr style="margin-top: 0; margin-bottom: 8px;">
<div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;">
<a href="https://manycore-research.github.io/SpatialLM" target="_blank" style="margin: 2px;"><img alt="Project"
src="https://img.shields.io/badge/🌐%20Website-SpatialLM-ffc107?color=42a5f5&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://arxiv.org/abs/2506.07491" target="_blank" style="margin: 2px;"><img alt="arXiv"
src="https://img.shields.io/badge/arXiv-Techreport-b31b1b?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://github.com/manycore-research/SpatialLM" target="_blank" style="margin: 2px;"><img alt="GitHub"
src="https://img.shields.io/badge/GitHub-SpatialLM-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/manycore-research/SpatialLM1.1-Qwen-0.5B" target="_blank" style="margin: 2px;"><img alt="Hugging Face"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialLM-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Dataset" target="_blank" style="margin: 2px;"><img alt="Dataset"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Dataset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Testset" target="_blank" style="margin: 2px;"><img alt="Dataset"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Testset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
</div>
The SpatialLM dataset is a large-scale, high-quality synthetic dataset created by professional 3D designers and used in real-world production. It contains point clouds from 12,328 diverse indoor scenes comprising 54,778 rooms, each paired with rich ground-truth 3D annotations. The SpatialLM dataset provides a valuable resource for advancing research in indoor scene understanding, 3D perception, and related applications. For more details about the dataset construction, annotations, and benchmark tasks, please refer to the [paper](https://arxiv.org/abs/2506.07491).
<table style="table-layout: fixed;">
<tr>
<td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/YFQzBUC_sGufXqpGL6YhV.jpeg" alt="example a" width="100%" style="display: block;"></td>
<td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/jRbPzBwhtDMWUwueodYax.jpeg" alt="example c" width="100%" style="display: block;"></td>
<td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/DpNKunoD-2-1spx6cXDxa.jpeg" alt="example b" width="100%" style="display: block;"></td>
<td style="text-align: center; vertical-align: middle; width: 25%"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/o-JgD-oY0oK0yhryWUexv.jpeg" alt="example d" width="100%" style="display: block;"></td>
</tr>
</table>
## Dataset Structure
The dataset is organized into the following folder structure:
```bash
SpatialLM-Dataset/
├── pcd/                   # Point cloud PLY files for rooms
│   └── <id>.ply
├── layout/                # GT room layouts
│   └── <id>.txt
├── examples/              # 10 point cloud and layout examples
│   ├── <id>.ply
│   └── <id>.txt
├── extract.sh             # Extraction script
├── dataset_info.json      # Dataset configuration file for training
├── spatiallm_train.json   # SpatialLM conversation data for training
├── spatiallm_val.json     # SpatialLM conversation data for validation
├── spatiallm_test.json    # SpatialLM conversation data for testing
└── split.csv              # Metadata CSV file
```
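After extraction, each room's point cloud in `pcd/` shares its id with the matching ground-truth layout file in `layout/`. A minimal sketch of pairing the two by filename stem (the `pair_room_files` helper is hypothetical and not part of the SpatialLM code base):

```python
from pathlib import Path

# Hypothetical helper: pair each room's point cloud with the GT layout
# file that shares its id, given the extracted folder layout shown above.
def pair_room_files(root):
    root = Path(root)
    pairs = []
    for ply in sorted(root.glob("pcd/*.ply")):
        txt = root / "layout" / (ply.stem + ".txt")
        if txt.exists():  # skip rooms without an extracted layout
            pairs.append((ply, txt))
    return pairs
```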
## Metadata
The dataset metadata is provided in the `split.csv` file with the following columns:
- **id**: Unique identifier for each sampled point cloud and layout following the naming convention `{scene_id}_{room_id}_{sample}` (e.g., `scene_001523_00_2`)
- **room_type**: The functional type of each room (e.g., bedroom, living room)
- **scene_id**: Unique identifier for multi-room apartment scenes
- **room_id**: Unique identifier for individual rooms within a scene
- **sample**: Point cloud sampling configuration for each room (4 types available):
- **0**: Most complete observations (8 panoramic views randomly sampled)
- **1**: Most sparse observations (8 perspective views randomly sampled)
- **2**: Less complete observations (16 perspective views randomly sampled)
- **3**: Less sparse observations (24 perspective views randomly sampled)
- **split**: Dataset partition assignment (`train`, `val`, `test`, `reserved`)
The dataset is divided into 11,328/500/500 scenes for the train/val/test splits, yielding 199,286/500/500 sampled point clouds, respectively; for the val and test splits, one of the multiple point cloud samples of each room is randomly selected for simplicity.
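The id convention and the `split.csv` columns above can be handled with the standard library alone. A small sketch, using hypothetical rows to illustrate the schema (these are not actual dataset entries):

```python
import csv
from io import StringIO

# Hypothetical rows illustrating the split.csv schema described above.
csv_text = """id,room_type,scene_id,room_id,sample,split
scene_001523_00_2,bedroom,scene_001523,00,2,train
scene_001523_01_0,living room,scene_001523,01,0,val
"""
rows = list(csv.DictReader(StringIO(csv_text)))

# Keep only the training rows.
train_rows = [r for r in rows if r["split"] == "train"]

# The id encodes {scene_id}_{room_id}_{sample}; split it from the right.
scene_id, room_id, sample = rows[0]["id"].rsplit("_", 2)
```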
## Data Extraction
Point clouds and layouts are compressed in zip files. To extract the files, run the following script:
```bash
cd SpatialLM-Dataset
chmod +x extract.sh
./extract.sh
```
## Conversation Format
The `spatiallm_train.json`, `spatiallm_val.json`, and `spatiallm_test.json` files follow the **SpatialLM format** with ShareGPT-style conversations:
```json
{
  "conversations": [
    {
      "from": "human",
      "value": "<point_cloud>Detect walls, doors, windows, boxes. The reference code is as followed: ..."
    },
    {
      "from": "gpt",
      "value": "<|layout_s|>wall_0=...<|layout_e|>"
    }
  ],
  "point_clouds": ["pcd/ID.ply"]
}
```
## Usage
Use the [SpatialLM code base](https://github.com/manycore-research/SpatialLM/tree/main) for reading the point cloud and the layout data.
```python
from spatiallm import Layout
from spatiallm.pcd import load_o3d_pcd

# Load a point cloud (e.g., one of the bundled examples)
point_cloud = load_o3d_pcd("examples/scene_008456_00_3.ply")

# Load the corresponding GT layout
with open("examples/scene_008456_00_3.txt", "r") as f:
    layout_content = f.read()
layout = Layout(layout_content)
```
## Visualization
Use `rerun` to visualize the point cloud and the GT structured 3D layout output:
```bash
python visualize.py --point_cloud examples/scene_008456_00_3.ply --layout examples/scene_008456_00_3.txt --save scene_008456_00_3.rrd
rerun scene_008456_00_3.rrd
```
## SpatialGen Dataset
The photorealistic RGB/Depth/Normal/Semantic/Instance panoramic renderings and the camera trajectories used to generate the SpatialLM point clouds are available through the [SpatialGen project](https://manycore-research.github.io/SpatialGen).
## Citation
If you find this work useful, please consider citing:
```bibtex
@inproceedings{SpatialLM,
  title     = {SpatialLM: Training Large Language Models for Structured Indoor Modeling},
  author    = {Mao, Yongsen and Zhong, Junhao and Fang, Chuan and Zheng, Jia and Tang, Rui and Zhu, Hao and Tan, Ping and Zhou, Zihan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025}
}
```