---
language:
- en
license: mit
tags:
- multiple-modality
- industrial-scene
- custom-dataset
size_categories:
- 10G<n<100G
source_datasets:
- original
---
# InspecSafe-V1
## Overview
**InspecSafe-V1** is a high-quality, multimodal annotated dataset designed for **world model construction and analysis in industrial environments**. The data was collected from real-world inspection robots deployed across industrial sites and has been carefully cleaned and standardized for research and applications in predictive world modeling for industrial scenarios.
The dataset covers five representative industrial settings: tunnels, power facilities, sintering equipment, oil/gas/chemical plants, and coal conveyor galleries. It was constructed using data from 41 wheeled or rail-mounted inspection robots operating at 2,239 valid inspection waypoints. At each waypoint, eight synchronized modalities are provided:
- Visible-light video
- Infrared video
- Audio
- Depth point clouds
- LiDAR point clouds
- Gas concentration readings
- Temperature
- Humidity
Additionally, pixel-level polygonal segmentation annotations are provided for industrial objects in visible-light images. To support downstream tasks, each sample is also accompanied by a semantic scene description and a corresponding safety-level label based on real inspection protocols.
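The eight synchronized modalities plus the per-sample annotations map naturally onto a small record type. The sketch below is illustrative only; field names and file extensions follow the directory layout shown later in this README, and are not part of any official loader shipped with the dataset.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WaypointSample:
    """One synchronized capture at an inspection waypoint (field names are illustrative)."""
    visible_video: str        # path to the RGB .mp4
    infrared_video: str       # path to the IR .mp4
    audio: str                # path to the .wav recording
    point_cloud: str          # path to the .bag with LiDAR/depth point clouds
    sensor_log: str           # path to the .txt with gas/temperature/humidity readings
    scene_description: str    # human-readable semantic description of the scene
    safety_label: Optional[str] = None  # safety level from the inspection protocol

# Hypothetical sample; real paths follow the dataset's naming convention.
sample = WaypointSample(
    visible_video="visible.mp4",
    infrared_video="infrared.mp4",
    audio="audio.wav",
    point_cloud="cloud.bag",
    sensor_log="sensor.txt",
    scene_description="control cabinet, normal state",
)
```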
## Dataset Format
The dataset is divided into a training set and a test set, both of which are organized in a structured directory layout with aligned multimodal streams and annotations. An overview of the data structure is shown below:
```
Annotations/
├── Normal_data/
│ └── 58132919741960_20251117_0.4kv_fadianjikongzhigui/
│ ├── 58132919741960_20251117_0.4kv_fadianjikongzhigui_visible_2025111719_frame_000001.jpg # Normal visible image
│ ├── 58132919741960_20251117_0.4kv_fadianjikongzhigui_visible_2025111719_frame_000001.json # Polygon segmentation annotation
│ └── 58132919741960_20251117_0.4kv_fadianjikongzhigui_visible_2025111719_frame_000001.txt # Semantic scene description
├── Anomaly_data/
│ └── hand_frame_000001/
│ ├── hand_frame_000001.jpg # Anomaly visible image
│ ├── hand_frame_000001.json # Annotation for anomaly
│ └── hand_frame_000001.txt # Semantic description of anomaly
├── Other_modalities/
│ └── 58132919741960_20251117_0.4kv_fadianjikongzhigui/
│ ├── 0.4kv_fadianjikongzhigui_visible_2025103008.mp4 # RGB video
│ ├── 0.4kv_fadianjikongzhigui_infrared_2025103008.mp4 # Infrared video
│ ├── 0.4kv_fadianjikongzhigui_sensor_2025103008.txt # Gas, temperature, humidity logs
│ ├── 0.4kv_fadianjikongzhigui_point_cloud_2025103008.bag # LiDAR/depth point cloud (ROS bag)
│ └── 0.4kv_fadianjikongzhigui_audio_2025103008.wav # Audio recording
└── Parameters/
└── Hardware/
├── Device_A.json # Hardware specs for Device A
└── Device_B.json # Hardware specs for Device B
```
### Notes
- **File naming convention**: sample directories are named `{collection_robot_MAC_address}_{collection_time}_{point_name}`; individual files append the modality, capture timestamp, and (for image frames) a frame index.
- **Annotations**:
- `.json`: Contains pixel-level polygonal segmentation masks for industrial objects.
- `.txt`: Contains human-readable semantic descriptions of the scene.
- **Other modalities**:
- Videos: `.mp4` (RGB / IR)
- Sensor logs: `.txt` (gas, temp, humidity)
- Point clouds: `.bag` (ROS bag format)
- Audio: `.wav`
- **Parameters**: Includes extrinsic calibration, hardware specs, and software settings for accurate multimodal fusion.
> This structure ensures synchronized access across all modalities and supports both supervised learning and world modeling tasks. Each sample's metadata (e.g., robot ID, location, timestamp, safety label) is stored in JSON format. Segmentation masks are provided as PNG images with instance IDs matching the annotation JSON.
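Because the point name itself may contain underscores (e.g. `0.4kv_fadianjikongzhigui`), the documented `{MAC}_{time}_{point_name}` convention is best parsed with a bounded split. A minimal sketch (the helper name is our own, not part of the dataset):

```python
def parse_sample_name(name: str) -> dict:
    """Split a sample directory name into its documented parts.

    Convention: {collection_robot_MAC_address}_{collection_time}_{point_name}.
    The point name may contain underscores, so split at most twice.
    """
    mac, time, point = name.split("_", 2)
    return {"mac": mac, "time": time, "point": point}

parts = parse_sample_name("58132919741960_20251117_0.4kv_fadianjikongzhigui")
# parts == {"mac": "58132919741960", "time": "20251117",
#           "point": "0.4kv_fadianjikongzhigui"}
```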
## Data Splits
| Split | Number of Samples |
|---------|-------------------|
| train | 3,763 |
| test | 1,250 |
> Note: The dataset does not include a separate validation split; users are encouraged to create one from the training set as needed.
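Since no official validation split exists, one way to carve one out of the training set is a seeded random split over sample IDs. A minimal sketch (the function and its parameters are our own suggestion, not part of the dataset):

```python
import random

def train_val_split(sample_ids, val_fraction=0.1, seed=0):
    """Carve a reproducible validation set out of the training sample IDs."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    ids = list(sample_ids)
    rng.shuffle(ids)
    n_val = max(1, int(len(ids) * val_fraction))
    return ids[n_val:], ids[:n_val]    # (train IDs, validation IDs)

# With the 3,763 training samples, a 10% holdout yields 376 validation IDs.
train_ids, val_ids = train_val_split(range(3763), val_fraction=0.1, seed=42)
```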
## License
This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).
## Acknowledgements
We thank the multimodal recognition algorithm team for their contributions to data collection and annotation. This work was supported by TetraBOT.
---
For questions or contributions, please open an issue in the repository.