# Semantic2D Dataset

A 2D lidar semantic segmentation dataset for mobile robotics applications. It is the first publicly available 2D lidar semantic segmentation dataset, featuring point-wise annotations for nine indoor object categories across twelve distinct environments.

**Associated Paper:** *Semantic2D: Enabling Semantic Scene Understanding with 2D Lidar Alone*

**Authors:** Zhanteng Xie, Yipeng Pan, Yinqiang Zhang, Jia Pan, Philip Dames

**Institutions:** The University of Hong Kong, Temple University

**Video:** https://youtu.be/P1Hsvj6WUSY

**GitHub:** https://github.com/TempleRAIL/semantic2d

---
## Dataset Overview

| Property | Value |
|----------|-------|
| Total Data Tuples | ~188,007 |
| Recording Rate | 20 Hz |
| Total Duration | ~131 minutes |
| Environments | 12 indoor environments |
| Buildings | 7 buildings (4 at Temple, 3 at HKU) |
| LiDAR Sensors | 3 types |
| Semantic Classes | 9 + background |
### Semantic Classes

| ID | Class | Description |
|----|-------|-------------|
| 0 | Background/Other | Unclassified objects |
| 1 | Chair | Chairs and seating |
| 2 | Door | Doors and doorways |
| 3 | Elevator | Elevator doors |
| 4 | Person | Dynamic pedestrians |
| 5 | Pillar | Structural pillars |
| 6 | Sofa | Sofas and couches |
| 7 | Table | Tables |
| 8 | Trash bin | Trash cans |
| 9 | Wall | Walls and partitions |
### LiDAR Sensors

| Sensor | Robot | Location | Range (m) | FOV (deg) | Angular Res. (deg) | Points |
|--------|-------|----------|-----------|-----------|--------------------|--------|
| Hokuyo UTM-30LX-EW | Jackal | Temple | [0.1, 60] | 270 | 0.25 | 1,081 |
| WLR-716 | Custom | HKU | [0.15, 25] | 270 | 0.33 | 811 |
| RPLIDAR-S2 | Custom | HKU | [0.2, 30] | 360 | 0.18 | 1,972 |

---
## Directory Structure

```
semantic2d_dataset_2025/
├── README.md
│
├── rosbags/                       # Original ROS bag files; kept for reference or to extract extra data (RGB, depth, goals, etc.)
│   ├── 2024-04-04-12-16-41.bag    # Temple Engineering Lobby
│   ├── 2024-04-04-13-48-45.bag    # Temple Engineering 6th floor
│   ├── 2024-04-04-14-18-32.bag    # Temple Engineering 9th floor
│   ├── 2024-04-04-14-50-01.bag    # Temple Engineering 8th floor
│   ├── 2024-04-11-14-37-14.bag    # Temple Engineering 4th floor
│   ├── 2024-04-11-15-24-29.bag    # Temple Engineering Corridor
│   ├── 2025-07-08-13-32-08.bag    # Temple SERC Lobby
│   ├── 2025-07-08-14-22-44.bag    # Temple Gladfelter Lobby
│   ├── 2025-07-18-17-43-11.bag    # Temple Mazur Lobby
│   ├── 2025-11-10-15-53-51.bag    # HKU Chow Yei Ching 4th floor
│   ├── 2025-11-11-21-27-17.bag    # HKU Centennial Campus Lobby
│   └── 2025-11-18-22-13-37.bag    # HKU Jockey Club 3rd floor
│
└── semantic2d_data/               # Semantic2D segmentation data files
    ├── dataset.txt                # Dataset index for each folder
    ├── 2024-04-04-12-16-41/       # Temple Engineering Lobby (Temple folders: Hokuyo)
    ├── 2024-04-04-13-48-45/       # Temple Engineering 6th floor
    ├── 2024-04-04-14-18-32/       # Temple Engineering 9th floor
    ├── 2024-04-04-14-50-01/       # Temple Engineering 8th floor
    ├── 2024-04-11-14-37-14/       # Temple Engineering 4th floor
    ├── 2024-04-11-15-24-29/       # Temple Engineering Corridor
    ├── 2025-07-08-13-32-08/       # Temple SERC Lobby
    ├── 2025-07-08-14-22-44/       # Temple Gladfelter Lobby
    ├── 2025-07-18-17-43-11/       # Temple Mazur Lobby
    ├── 2025-11-10-15-53-51/       # HKU Chow Yei Ching 4th floor (HKU folders: WLR-716 + RPLIDAR)
    ├── 2025-11-11-21-27-17/       # HKU Centennial Campus Lobby
    └── 2025-11-18-22-13-37/       # HKU Jockey Club 3rd floor
```
---

## Semantic2D Segmentation Data Folder Structure

### Temple University Environments (Hokuyo UTM-30LX-EW)

Folders: `2024-04-*` and `2025-07-*` (9 environments)
```
2024-04-04-12-16-41/                      # Engineering Lobby
├── train.txt                             # Training split filenames (70%)
├── dev.txt                               # Validation split filenames (10%)
├── test.txt                              # Test split filenames (20%)
│
├── scans_lidar/                          # LiDAR range data
│   └── *.npy                             # Shape: (1081,) - range values in meters
│
├── intensities_lidar/                    # LiDAR intensity data
│   └── *.npy                             # Shape: (1081,) - intensity values
│
├── line_segments/                        # Extracted line features
│   └── *.npy                             # Line segments [x1, y1, x2, y2] per line
│
├── positions/                            # Robot poses
│   └── *.npy                             # Shape: (3,) - [x, y, yaw] in map frame
│
├── velocities/                           # Velocity commands
│   └── *.npy                             # Shape: (2,) - [linear_x, angular_z]
│
├── semantic_label/                       # Point-wise semantic labels
│   └── *.npy                             # Shape: (1081,) - class ID per point
│
├── semantic_scan/                        # Semantic scan visualization data
│   └── *.npy
│
├── final_goals_local/                    # Navigation final goals
│   └── *.npy                             # Goal positions in local frame
│
├── sub_goals_local/                      # Navigation sub-goals
│   └── *.npy                             # Sub-goal waypoints
│
└── 202404041210_eng_lobby_map/           # Environment map
    ├── 202404041210_eng_lobby.pgm        # Occupancy grid map (PGM format)
    ├── 202404041210_eng_lobby.yaml       # Map configuration
    └── map_labelme/                      # Semantic labeled map
        ├── img.png                       # Original map image
        ├── label.png                     # Semantic label image
        ├── label_viz.png                 # Colored visualization
        ├── label_names.txt               # Class name list
        └── map_labelme.json              # LabelMe annotation file
```
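All per-frame modalities share a filename stem, so one data tuple is assembled by loading the same `*.npy` name from each folder. A minimal sketch, using a throwaway synthetic folder in the Temple layout (the directory and file names follow the structure above; the values are random placeholders, not real data):

```python
import os
import tempfile
import numpy as np

# Build one synthetic frame in the Temple layout (1081-point Hokuyo) and
# load it back the way a dataloader would.
rng = np.random.default_rng(0)
root = tempfile.mkdtemp()
frame = "0000001.npy"
for sub, arr in {
    "scans_lidar": rng.uniform(0.1, 60.0, 1081),       # ranges (m)
    "intensities_lidar": rng.uniform(0.0, 1.0, 1081),  # intensities
    "semantic_label": rng.integers(0, 10, 1081),       # class IDs 0-9
    "positions": np.array([1.0, 2.0, 0.5]),            # [x, y, yaw]
}.items():
    os.makedirs(os.path.join(root, sub))
    np.save(os.path.join(root, sub, frame), arr)

# One data tuple = the same filename across the per-modality folders
scan = np.load(os.path.join(root, "scans_lidar", frame))
labels = np.load(os.path.join(root, "semantic_label", frame))
pose = np.load(os.path.join(root, "positions", frame))
```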
### HKU Environments (WLR-716 + RPLIDAR-S2)

Folders: `2025-11-*` (3 environments)

```
2025-11-10-15-53-51/                      # Chow Yei Ching 4th floor
├── train.txt                             # Training split (70%)
├── dev.txt                               # Validation split (10%)
├── test.txt                              # Test split (20%)
│
├── # WLR-716 LiDAR data (811 points, 270 deg FOV)
├── scans_lidar_wlr716/                   # Range data
│   └── *.npy                             # Shape: (811,)
├── intensities_lidar_wlr716/             # Intensity data
│   └── *.npy                             # Shape: (811,)
├── line_segments_wlr716/                 # Line segments
│   └── *.npy
├── semantic_label_wlr716/                # Semantic labels
│   └── *.npy                             # Shape: (811,)
│
├── # RPLIDAR-S2 data (1972 points, 360 deg FOV)
├── scans_lidar_rplidar/                  # Range data
│   └── *.npy                             # Shape: (1972,)
├── intensities_lidar_rplidar/            # Intensity data
│   └── *.npy                             # Shape: (1972,)
├── line_segments_rplidar/                # Line segments
│   └── *.npy
├── semantic_label_rplidar/               # Semantic labels
│   └── *.npy                             # Shape: (1972,)
│
├── # Shared data (same for both sensors)
├── positions/                            # Robot poses [x, y, yaw]
│   └── *.npy                             # Shape: (3,)
├── velocities/                           # Velocity commands [vx, wz]
│   └── *.npy                             # Shape: (2,)
│
└── 202511101415_cyc_4th_map/             # Environment map
    ├── *.pgm                             # Occupancy grid
    ├── *.yaml                            # Map configuration
    └── map_labelme/                      # Semantic labels
        └── ...
```
---

## Data Format Details

### LiDAR Scan Data (`.npy`)
```python
import numpy as np

# Load range data
scan = np.load('scans_lidar/0000001.npy')  # Shape: (N,), N = num_points

# Per-sensor scan geometry:
# Hokuyo:     N = 1081, angle_min = -135 deg, angle_max = +135 deg
# WLR-716:    N = 811,  angle_min = -135 deg, angle_max = +135 deg
# RPLIDAR-S2: N = 1972, angle_min = -180 deg, angle_max = +180 deg
N = 1081
angle_min, angle_max = np.deg2rad(-135.0), np.deg2rad(135.0)  # radians for cos/sin

# Convert to Cartesian coordinates in the sensor frame
angles = np.linspace(angle_min, angle_max, num=N)
x = scan * np.cos(angles)
y = scan * np.sin(angles)
```
### Semantic Labels (`.npy`)

```python
# Load semantic labels
labels = np.load('semantic_label/0000001.npy')  # Shape: (N,)
# Each value is a class ID (0-9):
# 0: Background, 1: Chair, 2: Door, 3: Elevator, 4: Person,
# 5: Pillar, 6: Sofa, 7: Table, 8: Trash bin, 9: Wall
```
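Because the labels are index-aligned with the scan, per-class points fall out of a boolean mask. A small sketch with synthetic stand-ins for the `.npy` files:

```python
import numpy as np

# Synthetic stand-ins for scans_lidar/*.npy and semantic_label/*.npy
rng = np.random.default_rng(0)
N = 1081
scan = rng.uniform(0.1, 30.0, N)
labels = rng.integers(0, 10, N)

person_mask = labels == 4                   # beams that hit a pedestrian
person_ranges = scan[person_mask]           # their range values
counts = np.bincount(labels, minlength=10)  # per-class point histogram
```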
### Robot Pose (`.npy`)

```python
# Load robot pose
pose = np.load('positions/0000001.npy')  # Shape: (3,)
x, y, yaw = pose[0], pose[1], pose[2]    # Position and orientation in map frame
```
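A common use of the pose is moving scan points from the robot frame into the map frame. A minimal sketch, assuming the lidar frame coincides with the robot base (a fixed mounting offset would be composed in first):

```python
import numpy as np

# Apply the [x, y, yaw] pose as a 2D rigid transform to (M, 2) points.
def scan_to_map(xy_robot, pose):
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])       # 2D rotation by yaw
    return xy_robot @ R.T + np.array([x, y])

pts = np.array([[1.0, 0.0]])              # one point 1 m ahead of the robot
pose = np.array([2.0, 3.0, np.pi / 2])    # robot at (2, 3) facing +y
print(scan_to_map(pts, pose))             # ~[[2.0, 4.0]]
```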
### Dataset Splits (`train.txt`, `dev.txt`, `test.txt`)

```
# Each line contains a .npy filename
0001680.npy
0007568.npy
0009269.npy
...
```
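A sketch of consuming a split file; a stand-in `train.txt` is written to a temporary directory here so the snippet is self-contained:

```python
import os
import tempfile

# Write a tiny stand-in train.txt (entries from the example above)
root = tempfile.mkdtemp()
with open(os.path.join(root, "train.txt"), "w") as f:
    f.write("0001680.npy\n0007568.npy\n")

# Read the split and pair each frame with its per-modality paths
with open(os.path.join(root, "train.txt")) as f:
    names = [line.strip() for line in f if line.strip()]

samples = [(os.path.join(root, "scans_lidar", n),
            os.path.join(root, "semantic_label", n)) for n in names]
```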
---

## Semantic2D Labeling

### 1. Data Collection (`dataset_collection.py`)

ROS node for collecting data from the robot during teleoperation.

**Key Features:**
- Subscribes to LiDAR scans (`/scan`, `/wj716_base/scan`, `/rplidar_base/scan`)
- Records range, intensity, line segments, positions, velocities
- Saves data at 20 Hz as `.npy` files
### 2. Semi-Automated Labeling Framework (SALSA)

**Algorithm:**
1. Load the pre-labeled semantic map (from LabelMe)
2. For each LiDAR scan:
   - Extract line features for robust alignment
   - Apply ICP to refine the robot pose
   - Project LiDAR points into the map frame
   - Match points to semantic labels via pixel lookup
   - Label points in free space as "Person" (dynamic objects)
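The pixel-lookup step can be sketched as follows. The origin, resolution, and `label_map` below are synthetic stand-ins (a real map's values come from its YAML and `map_labelme/label.png`), and the sketch ignores details the full framework handles, such as the PGM's row ordering and the ICP-refined pose:

```python
import numpy as np

# Simplified pixel lookup: world point -> map pixel -> class ID.
MAP_ORIGIN = np.array([-5.0, -5.0])   # world coords of pixel (0, 0)
MAP_RESOLUTION = 0.05                 # meters per pixel
label_map = np.zeros((200, 200), dtype=np.uint8)  # stand-in for label.png
label_map[100, 120] = 9               # pretend a wall return lands here

def lookup_label(pt_world):
    # Floor to the pixel containing the point (both indices positive here)
    col, row = ((pt_world - MAP_ORIGIN) / MAP_RESOLUTION).astype(int)
    return int(label_map[row, col])

# Query the center of the pixel we labeled above
pt = MAP_ORIGIN + (np.array([120.0, 100.0]) + 0.5) * MAP_RESOLUTION
print(lookup_label(pt))  # -> 9 (Wall)
```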
**Configuration (modify in script):**

```python
DATASET_ODIR = "/path/to/raw/data"
MAP_ORIGIN = np.array([-82.0, -71.6, 0.0])  # From map YAML
MAP_RESOLUTION = 0.025                      # Meters per pixel
POINTS = 811                                # Number of LiDAR points
ANGLE_MIN = -2.356                          # Min angle (radians)
ANGLE_MAX = 2.356                           # Max angle (radians)
```
---

## Environments Summary

| Environment | Location | Folder | LiDAR |
|-------------|----------|--------|-------|
| Engineering Lobby | Temple | `2024-04-04-12-16-41` | Hokuyo |
| Engineering 6th Floor | Temple | `2024-04-04-13-48-45` | Hokuyo |
| Engineering 9th Floor | Temple | `2024-04-04-14-18-32` | Hokuyo |
| Engineering 8th Floor | Temple | `2024-04-04-14-50-01` | Hokuyo |
| Engineering 4th Floor | Temple | `2024-04-11-14-37-14` | Hokuyo |
| Engineering Corridor | Temple | `2024-04-11-15-24-29` | Hokuyo |
| SERC Lobby | Temple | `2025-07-08-13-32-08` | Hokuyo |
| Gladfelter Lobby | Temple | `2025-07-08-14-22-44` | Hokuyo |
| Mazur Lobby | Temple | `2025-07-18-17-43-11` | Hokuyo |
| Chow Yei Ching 4th Floor | HKU | `2025-11-10-15-53-51` | WLR-716/RPLIDAR |
| Centennial Campus Lobby | HKU | `2025-11-11-21-27-17` | WLR-716/RPLIDAR |
| Jockey Club 3rd Floor | HKU | `2025-11-18-22-13-37` | WLR-716/RPLIDAR |
---

## ROS Bag Contents

The original ROS bags in `rosbags/` contain:

| Topic | Message Type | Description |
|-------|--------------|-------------|
| `/scan` or `/*/scan` | `sensor_msgs/LaserScan` | LiDAR scans |
| `/robot_pose` | `geometry_msgs/PoseStamped` | Robot pose |
| `/cmd_vel` | `geometry_msgs/Twist` | Velocity commands |
| `/tf` | `tf2_msgs/TFMessage` | Transforms |
| `/map` | `nav_msgs/OccupancyGrid` | Occupancy map |
| `/camera/*` | `sensor_msgs/Image` | RGB/Depth images |
| `/odom` | `nav_msgs/Odometry` | Odometry |
---

## Related Resources

- **SALSA (Dataset and Labeling Framework):** https://github.com/TempleRAIL/semantic2d
- **S3-Net (Segmentation Algorithm):** https://github.com/TempleRAIL/s3_net
- **Semantic CNN Navigation:** https://github.com/TempleRAIL/semantic_cnn_nav
- **Dataset (Zenodo):** DOI: 10.5281/zenodo.18350696
---

## Citation

```bibtex
@article{xie2026semantic2d,
  title={Semantic2D: Enabling Semantic Scene Understanding with 2D Lidar Alone},
  author={Xie, Zhanteng and Pan, Yipeng and Zhang, Yinqiang and Pan, Jia and Dames, Philip},
  journal={arXiv preprint arXiv:2409.09899},
  year={2026}
}
```
---

## License

Please refer to the associated paper and GitHub repository for licensing information.

## Contact

- Zhanteng Xie: zhanteng@hku.hk