---
language:
- en
license: cc-by-4.0
tags:
- multiple-modality
- industrial-scene
- custom-dataset
size_categories:
- 10G<n<100G
source_datasets:
- original
---

# InspecSafe-V1

## Overview

**InspecSafe-V1** is a high-quality, annotated multimodal dataset designed for **world model construction and analysis in industrial environments**. The data was collected from real-world inspection robots deployed across industrial sites and has been carefully cleaned and standardized for research and applications in predictive world modeling for industrial scenarios.

The dataset covers five representative industrial settings: tunnels, power facilities, sintering equipment, oil/gas/chemical plants, and coal conveyor galleries. It was constructed using data from 41 wheeled or rail-mounted inspection robots operating at 2,239 valid inspection waypoints. At each waypoint, eight synchronized modalities are provided:

- Visible-light video  
- Infrared video  
- Audio  
- Depth point clouds  
- LiDAR point clouds  
- Gas concentration readings  
- Temperature  
- Humidity  

Additionally, pixel-level polygonal segmentation annotations are provided for industrial objects in visible-light images. To support downstream tasks, each sample is also accompanied by a semantic scene description and a corresponding safety-level label based on real inspection protocols.

## Dataset Format

The dataset is divided into a training set and a test set, both organized in a structured directory layout with aligned multimodal streams and annotations. An overview of the training-set structure is shown below:


```
train
├── Annotations
│   ├── Normal_data
│   │   ├── 58132919741960_20251117_0.4kv fadianjikongzhigui  # Format: collection robot MAC address_collection time_point name
│   │   │   ├── 58132919741960_20251117_0.4kv fadianjikongzhigui_visible_2025111719_frame_000001.jpg  # A normal image.
│   │   │   ├── 58132919741960_20251117_0.4kv fadianjikongzhigui_visible_2025111719_frame_000001.json  # The annotation of the normal image.
│   │   │   ├── 58132919741960_20251117_0.4kv fadianjikongzhigui_visible_2025111719_frame_000001.txt   # The semantic description of the normal image.
│   │   │   └── ...
│   │   └── ...
│   └── Anomaly_data
│       ├── hand_frame_000001
│       │   ├── hand_frame_000001.jpg  # An anomaly image.
│       │   ├── hand_frame_000001.json  # The annotation file of the anomaly image.
│       │   └── hand_frame_000001.txt  # The semantic description of the anomaly image.
│       └── ...
├── Other_modalities
│   ├── 58132919741960_20251117_0.4kv fadianjikongzhigui  # Format: collection robot MAC address_collection time_point name
│   │   ├── 0.4kv fadianjikongzhigui_visible_2025103008.mp4  # RGB video. Format: point name_modality type_timestamp
│   │   ├── 0.4kv fadianjikongzhigui_infrared_2025103008.mp4  # Infrared video.
│   │   ├── 0.4kv fadianjikongzhigui_sensor_2025103008.txt  # Other environmental sensing data including gas, temperature and humidity.
│   │   ├── 0.4kv fadianjikongzhigui_point_cloud_2025103008.bag  # Point cloud.
│   │   └── 0.4kv fadianjikongzhigui_audio_2025103008.wav  # Sound.
│   └── ...
└── Parameters  # Sensor extrinsic parameters, software and hardware parameters, etc.
    ├── Hardware  # Hardware-related parameters.
    ├── Device_A.json  # Parameters of Device A.
    ├── Device_B.json  # Parameters of Device B.
    └── ...
```
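Given this layout, pairing each annotated image with its JSON annotation and text description reduces to matching file stems. A minimal sketch, assuming the `train/Annotations` layout above (the exact root path on disk may differ):

```python
from pathlib import Path

def collect_samples(annotations_root: str):
    """Pair each .jpg with its .json annotation and .txt description
    by shared file stem, following the directory tree above."""
    samples = []
    for jpg in sorted(Path(annotations_root).rglob("*.jpg")):
        ann = jpg.with_suffix(".json")
        desc = jpg.with_suffix(".txt")
        if ann.exists() and desc.exists():
            samples.append({"image": jpg, "annotation": ann, "description": desc})
    return samples
```

This works identically for `Normal_data` and `Anomaly_data`, since both use the same stem-matched triplet convention.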


### Notes
- **File naming convention**: `{collection_robot_MAC_address}_{collection_time}_{point_name}`.
- **Annotations**:
  - `.json`: Contains pixel-level polygonal segmentation masks for industrial objects.
  - `.txt`: Contains human-readable semantic descriptions of the scene.
- **Other modalities**:
  - Videos: `.mp4` (RGB / IR)
  - Sensor logs: `.txt` (gas, temp, humidity)
  - Point clouds: `.bag` (ROS bag format)
  - Audio: `.wav`
- **Parameters**: Includes extrinsic calibration, hardware specs, and software settings for accurate multimodal fusion.

> This structure ensures synchronized access across all modalities and supports both supervised learning and world-modeling tasks. Each sample's metadata (e.g., robot ID, location, timestamp, safety label) is stored in JSON format, and segmentation masks are provided as PNG images with instance IDs matching the annotation JSON.
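The naming convention above can be parsed mechanically. A small sketch; splitting on the first two underscores is an assumption based on the example IDs in the tree, where point names may themselves contain spaces:

```python
def parse_sample_id(sample_id: str):
    """Split '{MAC}_{YYYYMMDD}_{point name}' on the first two underscores.
    Any further underscores are treated as part of the point name."""
    mac, date, point = sample_id.split("_", 2)
    return {"robot_mac": mac, "date": date, "point_name": point}
```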

## Data Splits

| Split   | Number of Samples |
|---------|-------------------|
| train   | 3,763             |
| test    | 1,250             |

> Note: The dataset does not include a separate validation split; users are encouraged to create one from the training set as needed.
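One way to carve a validation set from the training split is a seeded random hold-out. A minimal sketch; splitting at the sample-ID level (rather than the frame level) is an assumption intended to reduce leakage between near-duplicate frames from the same waypoint:

```python
import random

def make_val_split(sample_ids, val_fraction=0.1, seed=0):
    """Hold out a fraction of training sample IDs as a validation set.
    Returns (train_ids, val_ids); a fixed seed keeps the split reproducible."""
    ids = sorted(sample_ids)
    random.Random(seed).shuffle(ids)
    n_val = max(1, int(len(ids) * val_fraction))
    return ids[n_val:], ids[:n_val]
```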

## License

This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

## Acknowledgements

We thank the multimodal recognition algorithm team who contributed to data collection and annotation. This work was supported by TetraBOT.

---

For questions or contributions, please open an issue in the repository.