|
|
--- |
|
|
license: cc-by-4.0 |
|
|
task_categories: |
|
|
- robotics |
|
|
- image-segmentation |
|
|
- image-text-to-text |
|
|
--- |
|
|
|
|
|
# OSMa-Bench Dataset |
|
|
|
|
|
[**Project Page**](https://be2rlab.github.io/OSMa-Bench/) | [**Paper**](https://huggingface.co/papers/2503.10331) | [**Code**](https://github.com/be2rlab/OSMa-Bench) |
|
|
|
|
|
[](https://be2rlab.github.io/OSMa-Bench/) |
|
|
|
|
|
The OSMa-Bench (Open Semantic Mapping Benchmark) dataset is a fully automatically generated dataset for evaluating the robustness of open semantic mapping and segmentation systems under varying indoor lighting conditions and robot movement dynamics. It is part of the [OSMa-Bench](https://be2rlab.github.io/OSMa-Bench/) pipeline.
|
|
|
|
|
## Dataset Summary |
|
|
|
|
|
This dataset provides simulated RGB-D and semantically annotated posed sequences for evaluation of semantic mapping and segmentation, with a particular focus on handling dynamic lighting—a critical but often overlooked factor in existing benchmarks. It also includes a collection of automatically generated question–answer pairs across multiple categories to support the evaluation of scene-graph–based reasoning, offering a task-driven measure of how well a system’s reconstructed scene captures semantic relationships between objects. |
|
|
|
|
|
The data is built upon two base datasets: |
|
|
- **ReplicaCAD**: 22 scenes with 4 lighting configurations and a velocity modifier. |
|
|
- **Habitat Matterport 3D (HM3D)**: 8 scenes with 2 lighting configurations and a velocity modifier. |
|
|
|
|
|
## Installation |
|
|
|
|
|
We offer two versions of the dataset: one with separate files and one as a single compressed archive. Use the following command to download the separate files (this may be slow):
|
|
|
|
|
```bash |
|
|
git xet install |
|
|
git clone https://huggingface.co/datasets/warmhammer/OSMa-Bench_dataset -b main |
|
|
``` |
|
|
and this command to download the compressed version (which is faster):
|
|
|
|
|
```bash |
|
|
git xet install |
|
|
git clone https://huggingface.co/datasets/warmhammer/OSMa-Bench_dataset -b compressed |
|
|
unzip data.zip |
|
|
``` |
|
|
|
|
|
## Data Configurations |
|
|
|
|
|
The dataset includes the following configurations for the ReplicaCAD and HM3D scenes (a path-selection sketch follows the table):
|
|
|
|
|
| Configuration | Description | |
|
|
| :--- | :--- | |
|
|
| `baseline` | Static, non-uniformly distributed light sources (ReplicaCAD only) | |
|
|
| `dynamic_lighting` | Lighting conditions change along the robot's path (ReplicaCAD only) | |
|
|
| `nominal_lights` | The scene mesh itself emits light; no additional light sources are added |
|
|
| `camera_light` | An extra directed light source is attached to the camera | |
|
|
| `velocity` | Sequences recorded at twice the nominal velocity |
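
As a minimal sketch, one might enumerate sequences by configuration. The directory layout assumed below (one subdirectory per configuration inside each scene directory) and the `DATASET_ROOT` path are illustrative assumptions, not part of the dataset specification; adjust them to match your local copy:

```python
from pathlib import Path

# Assumed layout: <DATASET_ROOT>/<scene>/<configuration>/ -- verify locally.
DATASET_ROOT = Path("OSMa-Bench_dataset")

def sequences_for(configuration: str):
    """Yield all sequence directories recorded with the given configuration."""
    for scene_dir in sorted(p for p in DATASET_ROOT.iterdir() if p.is_dir()):
        candidate = scene_dir / configuration
        if candidate.is_dir():
            yield candidate

for seq in sequences_for("dynamic_lighting"):
    print(seq)
```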
|
|
|
|
|
--- |
|
|
|
|
|
## Data Structure |
|
|
|
|
|
The dataset provides structured data for each scene, suitable for tasks like 3D scene understanding, visual question answering, and robotics. Each scene contains the following components (a loading sketch follows the table):
|
|
|
|
|
| Component | Description | Format / Example | |
|
|
| ------------------------- | ------------------------------------------------------------------------------------------------- | --------------------------------------| |
|
|
| **RGB Images** | Standard color images captured from different camera viewpoints. | `frame000000.jpg`, ... | |
|
|
| **Depth Images** | Depth maps aligned with RGB images. Each pixel encodes depth in meters. | `depth000000.png`, ... | |
|
|
| **Semantic Masks** | Pixel-wise semantic segmentation labels. Each pixel corresponds to a semantic class ID. | `semantic000000.png`, ... | |
|
|
| **Camera Trajectories** | Flattened 4×4 transformation matrices representing camera poses for each frame. | `traj.txt` (one 4×4 matrix per line) | |
|
|
| **Question-Answer Pairs** | Validated question-answer pairs related to the scene, optionally associated with specific frames. | `validated_questions.json` | |
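
A minimal sketch for loading a single frame and its pose, assuming the filename patterns listed above; the integer depth encoding and the `DEPTH_SCALE` factor are assumptions to verify against your local copy:

```python
import numpy as np
from PIL import Image

scene_dir = "path/to/scene"  # placeholder: point this at one sequence directory

# Camera poses: one flattened 4x4 transformation matrix per line in traj.txt.
poses = np.loadtxt(f"{scene_dir}/traj.txt").reshape(-1, 4, 4)

# RGB, depth, and semantic data for frame 0, following the table's naming.
rgb = np.asarray(Image.open(f"{scene_dir}/frame000000.jpg"))
depth_raw = np.asarray(Image.open(f"{scene_dir}/depth000000.png"))
semantic = np.asarray(Image.open(f"{scene_dir}/semantic000000.png"))

# Depth PNGs are typically stored as scaled integers; this scale factor is
# an assumption -- check one sequence before relying on metric values.
DEPTH_SCALE = 1000.0  # assumed: raw units -> meters
depth_m = depth_raw.astype(np.float32) / DEPTH_SCALE

print(rgb.shape, depth_m.shape, semantic.shape, poses.shape)
```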
|
|
|
|
|
|
|
|
## VQA Question Categories |
|
|
The dataset includes automatically generated question–answer pairs of the following types (a reading sketch follows the list):
|
|
1. **Binary General** – Yes/No questions about the presence of objects and general scene characteristics |
|
|
*Example:* `Is there a blue sofa?` |
|
|
|
|
|
2. **Binary Existence-Based** – Yes/No questions designed to track false positives by querying non-existent objects |
|
|
*Example:* `Is there a piano?` |
|
|
|
|
|
3. **Binary Logical** – Yes/No questions with logical operators such as AND/OR |
|
|
*Example:* `Is there a chair AND a table?` |
|
|
|
|
|
4. **Measurement** – Questions requiring numerical answers related to object counts or scene attributes |
|
|
*Example:* `How many windows are present?` |
|
|
|
|
|
5. **Object Attributes** – Queries about object properties, including color, shape, and material |
|
|
*Example:* `What color is the door?` |
|
|
|
|
|
6. **Object Relations (Functional)** – Questions about functional relationships between objects |
|
|
*Example:* `Which object supports the table?` |
|
|
|
|
|
7. **Object Relations (Spatial)** – Queries about spatial placement of objects within the scene |
|
|
*Example:* `What is in front of the staircase?` |
|
|
|
|
|
8. **Comparison** – Questions that compare object properties such as size, color, and position |
|
|
*Example:* `Which is taller: the bookshelf or the lamp?` |
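
A minimal sketch for reading and tallying the pairs, assuming a JSON schema of records with `question`, `answer`, and `category` fields; inspect `validated_questions.json` for the actual keys:

```python
import json
from collections import Counter

# Hypothetical schema: a list of {"question": ..., "answer": ..., "category": ...}.
with open("path/to/scene/validated_questions.json") as f:
    qa_pairs = json.load(f)

# Count how many questions fall into each category.
by_category = Counter(item["category"] for item in qa_pairs)
for category, count in sorted(by_category.items()):
    print(f"{category}: {count}")
```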
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use the OSMa-Bench dataset in your research, please cite the following paper: [OSMa-Bench on arXiv](https://arxiv.org/abs/2503.10331).
|
|
|
|
|
```bibtex |
|
|
@inproceedings{popov2025osmabench, |
|
|
title = {OSMa-Bench: Evaluating Open Semantic Mapping Under Varying Lighting Conditions}, |
|
|
author = {Popov, Maxim and Kurkova, Regina and Iumanov, Mikhail and Mahmoud, Jaafar and Kolyubin, Sergey}, |
|
|
booktitle = {2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, |
|
|
year = {2025} |
|
|
} |
|
|
``` |