---
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- perception
- cooperative
- collective
- autonomous_driving
pretty_name: >-
  CoopScenes: Multi-Scene Infrastructure and Vehicle Data for Advancing
  Collective Perception in Autonomous Driving
viewer: false
---

# 📚 CoopScenes Dataset

![CoopScenes Overview Slide](docu/Coop-Scenes.png)

## 🌟 Overview

**CoopScenes** is a comprehensive multi-scene dataset designed for research in collective perception, sensor registration, and cooperative systems in urban environments. It features synchronized data from an ego vehicle and infrastructure-mounted sensors across real-world scenarios, including public transport stops, construction sites, and expressways.

- **Duration**: ~104 minutes at 10 Hz → ~62,000 frames (~527 GB in `.4mse` format)
- **Synchronization**: sub-frame alignment with a mean offset of ~2.3 ms
- **Scenarios**: collected across multiple cities in the Stuttgart metropolitan area

🔜 More information: [coopscenes.github.io](https://coopscenes.github.io/)

---

## 🛠️ Sensor Setup & Annotations

The dataset features time-synchronized and spatially calibrated sensors on both the ego vehicle and the roadside infrastructure (towers), including:

- LiDAR (Ouster OS2, Blickfeld Qb2)
- Multi-camera systems
- GNSS and IMU
- Object annotations (automatically generated)
- Privacy-preserving anonymization using [**BlurScene**](https://pypi.org/project/BlurScene/)

---

## ✅ Key Features

| Feature | Description |
| ------------------------------------- | -------------------------------------------------------- |
| ~62,000 frames at 10 Hz | ~104 minutes of data |
| High-precision synchronization | Mean offset of ~2.3 ms |
| Vehicle-to-infrastructure setup | Multi-agent cooperative perception |
| Diverse scenarios | Public transport stops, construction sites, expressways |
| Automatic annotations & anonymization | Faces and license plates blurred with BlurScene |

---

## 📦 Installation & Usage

Install the CoopScenes Python package:

```bash
pip install coopscenes
```

Then load and explore the dataset using the included developer tools:

```python
from coopscenes import DataRecord

# Open a specific .4mse file
record = DataRecord("/content/example_record_1.4mse")

# Access the first frame of the record
frame = record[0]

# Display the left stereo camera image from the ego vehicle
frame.vehicle.cameras.STEREO_LEFT.show()

# Example access: point cloud of the tower's upper-platform LiDAR
print(frame.tower.lidars.UPPER_PLATFORM.points.shape)
```

Additional tooling, documentation, and format specifications can be found in the [developer toolkit](https://pypi.org/project/coopscenes/).

---

## 🚀 Google Colab (Quickstart)

Get started with the data using our ready-to-run [**Colab notebook**](https://coopscenes.github.io/#colab). It demonstrates:

- Reading `.4mse` files
- Visualizing sensor data
- Performing simple analysis tasks

---

## 📄 Citation

Please cite the following if you use CoopScenes in your work (an IEEE IV '25 publication is forthcoming):

```bibtex
@misc{vosshans2025aeifdatacollectiondataset,
  author = {Marcel Vosshans and Alexander Baumann and Matthias Drueppel and Omar Ait-Aider and Youcef Mezouar and Thao Dang and Markus Enzweiler},
  title  = {CoopScenes: Multi-Scene Infrastructure and Vehicle Data for Advancing Collective Perception in Autonomous Driving},
  url    = {https://arxiv.org/abs/2407.08261},
  year   = {2025},
}
```

---

## 🔐 License

The dataset is released under the **MIT License**. Refer to the LICENSE file for details.
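
---

## 🧪 Extended Example: Iterating Over a Record

Beyond the single-frame access shown in "Installation & Usage", a common task is to sweep through an entire record, e.g. to collect per-frame point-cloud statistics. The sketch below is a minimal, illustrative extension of that usage example: it assumes `DataRecord` supports `len()` and general integer indexing (only `record[0]` appears in the documented example), so check the [developer toolkit](https://pypi.org/project/coopscenes/) for the actual iteration API before relying on it.

```python
from coopscenes import DataRecord

# Open a record (example path; substitute your own .4mse file)
record = DataRecord("/content/example_record_1.4mse")

# Assumption: DataRecord supports len() and integer indexing beyond
# record[0]; adjust to the documented iteration API if it differs.
for i in range(len(record)):
    frame = record[i]

    # Point cloud of the tower's upper-platform LiDAR; attribute names
    # follow the single-frame example in "Installation & Usage".
    points = frame.tower.lidars.UPPER_PLATFORM.points
    print(f"frame {i}: {points.shape[0]} tower LiDAR points")
```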