Alvin16 committed on
Commit 2b0bec7 · verified · 1 Parent(s): ef7a371

Update README.md

Files changed (1): README.md (+138 −151)

---
license: cc-by-nc-4.0
task_categories:
- image-segmentation
- 3D-segmentation
- object-detection
- pose-estimation
- depth-estimation
- multi-modal fusion
tags:
- spacecraft
- satellite
- semantic-segmentation
- pose-estimation
- point-cloud
- multi-modal
- simulation
- space
- AirSim
- Unreal-Engine
pretty_name: "SpaceSense-Bench"
size_categories:
- 10K<n<100K
---

# SpaceSense-Bench: Multi-Modal Spacecraft Perception and Pose Estimation Dataset

A high-fidelity, simulation-based multi-modal (RGB, depth, LiDAR point cloud) dataset for spacecraft component-level semantic understanding, covering **136 satellite models** with synchronized multi-modal data.

**Toolkit & Code:** [https://github.com/wuaodi/SpaceSense-Bench](https://github.com/wuaodi/SpaceSense-Bench)

## Dataset Overview

| Item | Detail |
|------|--------|
| Satellite Models | 136 (sourced from NASA/ESA 3D models) |
| Data Modalities | RGB, Depth, Semantic Segmentation, LiDAR Point Cloud, 6-DoF Pose |
| Image Resolution | 1024 x 1024 |
| Camera FOV | 50 degrees |
| Semantic Classes | 7 (main_body, solar_panel, dish_antenna, omni_antenna, payload, thruster, adapter_ring) |
| Simulation Platform | Unreal Engine 5.2.0 + AirSim 1.8.1 |

## Data Modalities

| Modality | Format | Unit / Range | Description |
|----------|--------|--------------|-------------|
| RGB | PNG (1024x1024) | 8-bit color | Scene rendering |
| Depth | PNG (1024x1024) | int32, millimeters (0 ~ 10,000,000 mm; background = 10,000 m) | Per-pixel depth map |
| Semantic Segmentation | PNG (1024x1024) | uint8, class ID per pixel (0 = background) | Component-level segmentation mask |
| LiDAR Point Cloud | ASC (x y z per line) | meters, 3 decimal places | Sparse 3D point cloud |
| 6-DoF Pose | CSV | meters + Hamilton quaternion (w, x, y, z) | Camera-to-target relative pose |

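Since the depth maps are stored in millimeters with the deep-space background written as 10,000 m, a minimal sketch of the unit conversion may help (this assumes the PNG has already been decoded to an int32 NumPy array, e.g. via `imageio` or OpenCV's `IMREAD_UNCHANGED`; the helper name is illustrative, not part of the toolkit):

```python
import numpy as np

BACKGROUND_MM = 10_000_000  # deep-space background: 10,000 m in millimeters

def depth_mm_to_meters(depth_mm: np.ndarray) -> np.ndarray:
    """Convert an int32 depth map in millimeters to float32 meters,
    setting deep-space background pixels to NaN."""
    depth_m = depth_mm.astype(np.float32) / 1000.0
    depth_m[depth_mm >= BACKGROUND_MM] = np.nan
    return depth_m

# Tiny synthetic example: one valid pixel at 2.5 m, one background pixel.
demo = np.array([[2500, BACKGROUND_MM]], dtype=np.int32)
print(depth_mm_to_meters(demo))  # [[2.5 nan]]
```

Masking the background as NaN (rather than keeping 10,000 m) avoids skewing depth-error metrics; any other sentinel works equally well.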
## Coordinate System & Units

| Item | Convention |
|------|------------|
| Camera Frame | X-forward, Y-right, Z-down (right-handed) |
| World Frame | AirSim NED; target spacecraft fixed at origin |
| Quaternion | Hamilton convention: w + xi + yj + zk |
| Euler Angles | ZYX intrinsic (Yaw-Pitch-Roll) |
| Position | meters (m), 6 decimal places |
| Depth Map | millimeters (mm), int32; deep-space background = 10,000 m |
| LiDAR | meters (m), .asc format (x y z), 3 decimal places |
| Timestamp | YYYYMMDDHHMMSSmmm |

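Because the poses use the Hamilton (w, x, y, z) convention, care is needed with libraries that assume scalar-last ordering; SciPy's `Rotation.from_quat`, for example, expects (x, y, z, w) by default. A small self-contained sketch of the quaternion-to-rotation-matrix conversion:

```python
import numpy as np

def quat_to_rotmat(w: float, x: float, y: float, z: float) -> np.ndarray:
    """Rotation matrix from a Hamilton-convention quaternion (w, x, y, z)."""
    n = np.sqrt(w * w + x * x + y * y + z * z)  # normalize defensively
    w, x, y, z = w / n, x / n, y / n, z / n
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# Identity quaternion -> identity rotation.
print(np.allclose(quat_to_rotmat(1.0, 0.0, 0.0, 0.0), np.eye(3)))  # True
```
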
## Sensor Configuration

### Camera (cam0)

- Resolution: 1024 x 1024
- FOV: 50 degrees
- Image types captured: RGB (type 0), Segmentation (type 5), Depth (type 2)
- TargetGamma: 1.0

### LiDAR

- Range: 300 m
- Channels: 256
- Vertical FOV: -20 to +20 degrees
- Horizontal FOV: -20 to +20 degrees
- Data frame: SensorLocalFrame

## Data Split (Zero-shot / OOD)

The train, test, and validation splits contain **completely non-overlapping satellite models**, so held-out performance reflects zero-shot generalization to unseen spacecraft.

| Split | Satellites | Rule |
|-------|-----------:|------|
| Train | 117 | All remaining satellites (not in test or validation) |
| Test | 14 | Every 10th by index: seq 00, 10, 20, ..., 130 |
| Validation | 5 | Seq 131-135, reserved for future testing |

**Test satellites (14):**
ACE (00), CALIPSO (10), Dawn (20), ExoMars_TGO (30), GRAIL (40), Integral (50), LADEE (60), Lunar_Reconnaissance_Orbiter (70), Mercury_Magnetospheric_Orbiter (80), OSIRIS_REX (90), Proba_2 (100), SOHO (110), Suomi_NPP (120), Ulysses (130)

**Validation satellites (5):**
Van_Allen_Probe (131), Venus_Express (132), Voyager (133), WIND (134), XMM_newton (135)

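The split rule above can be written as a function of the sequence index; this is a reconstruction from the stated counts, not an official script:

```python
def assign_split(seq_index: int) -> str:
    """Split assignment rule for sequence indices 0-135."""
    if 131 <= seq_index <= 135:
        return "val"          # reserved validation satellites
    if seq_index % 10 == 0:   # 0, 10, 20, ..., 130 -> held-out test
        return "test"
    return "train"

splits = [assign_split(i) for i in range(136)]
print(splits.count("train"), splits.count("test"), splits.count("val"))  # 117 14 5
```
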
## Data Organization

Each `.tar.gz` file in the `raw/` folder contains data for one satellite:

```
<timestamp>_<satellite_name>/
├── approach_front/
│   ├── rgb/            # RGB images (.png)
│   ├── depth/          # Depth maps (.png, int32, mm)
│   ├── segmentation/   # Semantic masks (.png, uint8)
│   ├── lidar/          # Point clouds (.asc)
│   └── poses.csv       # 6-DoF poses
├── approach_back/
├── orbit_xy/
└── ...
```

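Given this layout, the `.asc` point clouds (one `x y z` triple per line, in meters) load directly with NumPy; `load_asc` here is an illustrative helper, not part of the toolkit:

```python
import numpy as np
from io import StringIO

def load_asc(path_or_buf) -> np.ndarray:
    """Load an .asc point cloud (one 'x y z' triple per line, meters)
    into an (N, 3) float64 array."""
    return np.loadtxt(path_or_buf, dtype=np.float64).reshape(-1, 3)

# Works the same on a file path or an in-memory buffer:
demo = StringIO("1.000 2.000 3.000\n4.500 -0.250 7.125\n")
pts = load_asc(demo)
print(pts.shape)  # (2, 3)
```

The `reshape(-1, 3)` keeps the output two-dimensional even for a single-point file, where `np.loadtxt` would otherwise return a 1-D array.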
## Semantic Class Definitions

| Class ID | Name | Description |
|:--------:|------|-------------|
| 0 | background | Deep-space background |
| 1 | main_body | Spacecraft main body / bus |
| 2 | solar_panel | Solar panels / solar arrays |
| 3 | dish_antenna | Dish / parabolic antennas |
| 4 | omni_antenna | Omnidirectional antennas / booms |
| 5 | payload | Scientific instruments / payloads |
| 6 | thruster | Thrusters / propulsion systems |
| 7 | adapter_ring | Launch adapter rings |

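For visualization, the class IDs can be mapped to colors with a lookup table; the palette below is an arbitrary illustrative choice (the dataset does not prescribe display colors):

```python
import numpy as np

CLASS_NAMES = {
    0: "background", 1: "main_body", 2: "solar_panel", 3: "dish_antenna",
    4: "omni_antenna", 5: "payload", 6: "thruster", 7: "adapter_ring",
}

# Illustrative RGB palette, indexed by class ID.
PALETTE = np.array([
    [0, 0, 0],        # 0 background
    [128, 128, 128],  # 1 main_body
    [0, 0, 255],      # 2 solar_panel
    [255, 165, 0],    # 3 dish_antenna
    [255, 255, 0],    # 4 omni_antenna
    [0, 255, 0],      # 5 payload
    [255, 0, 0],      # 6 thruster
    [128, 0, 128],    # 7 adapter_ring
], dtype=np.uint8)

def colorize(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) uint8 class-ID mask to an (H, W, 3) RGB image."""
    return PALETTE[mask]

demo = np.array([[0, 2], [7, 1]], dtype=np.uint8)
print(colorize(demo).shape)  # (2, 2, 3)
```
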
## Usage

### Format Conversion

Use the toolkit at [https://github.com/wuaodi/SpaceSense-Bench](https://github.com/wuaodi/SpaceSense-Bench) to convert the raw data to standard formats:

```bash
# Convert to SemanticKITTI (3D point cloud segmentation)
python convert/airsim_to_semantickitti.py --raw-data ./data/raw --output ./kitti_data

# Convert to YOLO (2D object detection)
python convert/airsim_to_yolo.py --raw-data ./data/raw --output ./yolo_data

# Convert to MMSegmentation (2D semantic segmentation)
python convert/airsim_to_mmseg.py --raw-data ./data/raw --output ./mmseg_data
```

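Before running the converters, each per-satellite archive needs unpacking. A small sketch (the `extract_all` helper and directory names are assumptions, not part of the toolkit; it assumes the archives sit directly under `raw/`):

```python
import tarfile
from pathlib import Path

def extract_all(raw_dir: str, out_dir: str) -> int:
    """Extract every per-satellite .tar.gz archive in raw_dir into out_dir.
    Returns the number of archives extracted."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    archives = sorted(Path(raw_dir).glob("*.tar.gz"))
    for archive in archives:
        with tarfile.open(archive, "r:gz") as tar:
            # Each archive unpacks to <timestamp>_<satellite_name>/.
            tar.extractall(out)
    return len(archives)
```
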
## License

This dataset is released under the [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. Non-commercial use only.