Welcome to the **CVPR 2026 Auto-Annotation Challenge**, organized under the AutoExpert workshop.
Inspired by recent advancements in foundation models and the critical bottleneck of data annotation in autonomous driving, this challenge introduces a novel paradigm: Auto-Annotation from Expert-Crafted Guidelines.
Instead of training on massive annotated 3D datasets, participants are required to develop models that can comprehend nuanced textual annotator instructions and a few 2D visual examples to predict 3D bounding boxes in LiDAR data. Crucially, no 3D LiDAR visual references are provided in the training phase.
This dataset is built upon the foundational PandaSet dataset, specifically adapted for this multimodal few-shot/zero-shot 3D detection task.
## 1. Directory Structure
The repository is organized as follows to support the auto-annotation task. It contains the expert guidelines, the 2D few-shot training examples, and the multimodal testing inputs:
```
cvpr-workshop-challenge-annoexpert-public/
├── annotator_instructions/   # Textual guidelines for each category
│   └── instructions.pdf      # Detailed definition & rules per class
├── seq/                      # Shared sequence data (loaded via PandaSet Devkit)
│   └── {seq_id}/             # Standard PandaSet sequence structure
│       └── lidar/            # Raw LiDAR point clouds
├── test/                     # Multimodal Testing Set (inputs only)
│   └── images/               # Multi-view test images (200 target frames)
├── train/                    # Few-Shot 2D Examples (Federated Annotation)
│   ├── images/               # Exemplar images for each category
│   └── 2D_annotations/       # 2D bounding box annotations
├── val/                      # Validation Set
│   ├── 2D_annotations/       # 2D bounding box annotations for validation frames
│   ├── 3D_annotations/       # Ground truth 3D bounding boxes for local evaluation
│   └── images/               # Multi-view validation images
├── .gitattributes            # Git LFS configuration
└── README.md                 # Dataset documentation
```
## 2. Task Formulation & Data Formats
### 2.1 Expert Guidelines & Few-Shot Examples (Training)
Participants must rely on the provided guidelines and few-shot examples to understand the 25 target categories.
- **Annotator Instructions** (`annotator_instructions/`): Contains the expert-crafted definitions and rules for annotating each class (e.g., whether to include a rider within a bicycle bounding box).
- **2D Visual Examples** (`train/`):
  - **Naming Convention** (`images/`): `{category_name}&{seq_id}_{camera_name}_{frames_id}.[ext]`
  - **Federated Annotation** (`2D_annotations/`): These 2D examples are annotated in a federated way. In a given image, only objects belonging to the target `{category_name}` are annotated, while objects of other classes are intentionally ignored.
  - **Label Format** (`.txt`): `x y w h cls`, where `x`, `y` are the top-left coordinates and `w`, `h` are the width and height.
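The label files described above can be read with a few lines of Python. This is a minimal sketch; the `Box2D` name is illustrative, and it assumes `cls` is a single whitespace-free token as in the `x y w h cls` format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box2D:
    x: float   # left coordinate of the box
    y: float   # top coordinate of the box
    w: float   # box width
    h: float   # box height
    cls: str   # category token (assumed to be the final field)

def parse_label_file(text: str) -> List[Box2D]:
    """Parse whitespace-separated `x y w h cls` lines from a 2D annotation file."""
    boxes = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        x, y, w, h, cls = line.split()
        boxes.append(Box2D(float(x), float(y), float(w), float(h), cls))
    return boxes
```

Remember that these files are federated: absence of a box does not mean absence of an object of another class.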
### 2.2 Validation Set (`val/`)
To help participants validate their models before submitting to the evaluation server, a validation set covering 8 specific keyframes is provided with both 2D and 3D annotations.
- **Images** (`val/images/`): Multi-view images for the validation frames.
- **2D Annotations** (`val/2D_annotations/`): Comprehensive 2D bounding box annotations.
- **3D Annotations** (`val/3D_annotations/`): Ground truth 3D bounding boxes. Since the training data lacks 3D references, you can use this set to locally evaluate your model's 3D detection metrics (mAP, NDS).
- **LiDAR & Calibration** (`seq/`): Crucially, the corresponding raw LiDAR point clouds and sensor poses/intrinsics for these validation frames must be loaded via the PandaSet Devkit from the shared root `seq/{seq_id}/` directory.
### 2.3 Test Sensor Data (Evaluation)
The evaluation focuses on 192 specific keyframes in the test set.
- **Images** (`test/images/`): Contains multi-view test images (inputs only).
- **LiDAR & Calibration** (`seq/`): As with the validation set, the raw LiDAR sweeps and necessary calibration metadata for the test frames are provided within the shared root `seq/{seq_id}/` directory.
- **Data Access via Devkit**: You must use the official PandaSet Devkit API to read point clouds and sensor calibration. For example, use `sequence.camera[camera_name].poses[frame_idx]` for extrinsics, `sequence.camera[camera_name].intrinsics` for intrinsics, and `sequence.lidar[frame_idx]` for point clouds.
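The Devkit returns camera poses as nested position/heading (quaternion) dictionaries rather than matrices. A minimal sketch, assuming the standard PandaSet pose layout (`{'position': {x, y, z}, 'heading': {w, x, y, z}}` with a unit quaternion), that converts one pose into a 4×4 transform for projection; the helper name is illustrative:

```python
import numpy as np

def pose_to_matrix(pose: dict) -> np.ndarray:
    """Convert a PandaSet-style pose dict into a 4x4 homogeneous transform."""
    t = pose["position"]
    q = pose["heading"]
    w, x, y, z = q["w"], q["x"], q["y"], q["z"]
    # Unit quaternion -> 3x3 rotation matrix
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [t["x"], t["y"], t["z"]]
    return T
```

A pose from `sequence.camera[camera_name].poses[frame_idx]` can then be passed directly to this helper.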
## 3. Submission Format
The evaluation server requires 3D detection results in the exact JSON format specified below.
Participants must generate a single submission.json file containing all predictions for the 200 test images. The JSON file should contain a list of dictionaries, where each dictionary represents a single predicted 3D bounding box.
JSON Structure Example:
```json
[
  {
    "seq_id": "001",
    "frame_idx": 29,
    "frame_token": "001_front_left_camera_000029",
    "label": "Car",
    "score": 0.8,
    "box_3d": [10.5, -3.2, -1.0, 4.5, 1.8, 1.5, 0.12]
  },
  {
    "seq_id": "001",
    "frame_idx": 29,
    "frame_token": "001_front_left_camera_000029",
    "label": "Pedestrian",
    "score": 0.9,
    "box_3d": [12.1, -1.5, -0.8, 0.5, 0.6, 1.7, 0.05]
  }
]
```
Field Definitions:
- seq_id (String): The sequence identifier from the PandaSet dataset (e.g., "001").
- frame_idx (Integer): The frame index within the sequence (e.g., 29).
- frame_token (String): The unique identifier for the specific camera frame, formatted as `{seq_id}_{camera_name}_{frame_idx}` (e.g., "001_front_left_camera_000029").
- label (String): The predicted category name. It must exactly match one of the 25 official classes.
- score (Float): The confidence score of the prediction (between 0.0 and 1.0).
- box_3d (List of Floats): The 3D bounding box parameters in the LiDAR coordinate system. It must contain exactly 7 values: [x, y, z, l, w, h, yaw].
- x, y, z: The 3D center coordinates.
- l, w, h: The length, width, and height of the box.
- yaw: The orientation/yaw angle in radians.
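For local visualization, or for comparing predictions against `val/3D_annotations/`, it helps to expand a box into its eight corner points. A minimal sketch, assuming `(x, y, z)` is the box center and `yaw` rotates about the vertical (+z) axis; the helper name is mine:

```python
import numpy as np

def box3d_corners(box):
    """Return the 8 corners (8x3 array) of a [x, y, z, l, w, h, yaw] box.
    Assumes (x, y, z) is the box center and yaw rotates about +z."""
    x, y, z, l, w, h, yaw = box
    # Axis-aligned corner offsets before rotation
    dx = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * (l / 2)
    dy = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * (w / 2)
    dz = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * (h / 2)
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotate in the ground plane, then translate to the box center
    return np.stack([c*dx - s*dy + x, s*dx + c*dy + y, dz + z], axis=1)
```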
Please ensure your final submission is a single valid JSON file named submission.json.
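A lightweight sanity check against the schema above can catch malformed entries before uploading. This sketch is illustrative (the function names are not part of the challenge tooling):

```python
import json

REQUIRED_KEYS = {"seq_id", "frame_idx", "frame_token", "label", "score", "box_3d"}

def validate_prediction(pred: dict) -> None:
    """Raise ValueError if a single prediction dict violates the schema."""
    missing = REQUIRED_KEYS - pred.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    if not isinstance(pred["seq_id"], str):
        raise ValueError("seq_id must be a string")
    if not isinstance(pred["frame_idx"], int):
        raise ValueError("frame_idx must be an integer")
    if not (0.0 <= pred["score"] <= 1.0):
        raise ValueError("score must lie in [0.0, 1.0]")
    if len(pred["box_3d"]) != 7:
        raise ValueError("box_3d needs exactly 7 values: [x, y, z, l, w, h, yaw]")

def write_submission(preds: list, path: str = "submission.json") -> None:
    """Validate every prediction, then serialize the full list to one JSON file."""
    for p in preds:
        validate_prediction(p)
    with open(path, "w") as f:
        json.dump(preds, f)
```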
## 4. Class Categories
The dataset includes 25 distinct object categories, requiring models to handle diverse traffic participants and nuanced object definitions as specified in the Expert Guidelines.
| Vehicle & Transport | Vulnerable Road Users (VRU) | Infrastructure & Obstacles |
|---|---|---|
| Car | Pedestrian | Temporary_Construction_Barriers |
| Pickup_Truck | Pedestrian_with_Object | Cones |
| Medium-sized_Truck | Bicycle | Signs |
| Semi-truck | Motorcycle | Rolling_Containers |
| Bus | Motorized_Scooter | Pylons |
| Tram_or_Subway | Personal_Mobility_Device | Road_Barriers |
| Emergency_Vehicle | Animals-Other | Construction_Signs |
| Other_Vehicle-Construction_Vehicle | Towed_Object | |
| Other_Vehicle-Uncommon | | |
| Other_Vehicle-Pedicab | | |
Note: Participants must adhere to the specific definitions for each class (e.g., the inclusion of riders in the Bicycle box) as outlined in the provided annotator instructions.
## 5. Getting Started
We highly recommend using huggingface-cli or git lfs to download the dataset due to the large size of the high-resolution images.
```bash
# Ensure Git LFS is installed
git lfs install

# Clone the dataset repository
git clone https://huggingface.co/datasets/YOUR_ORG_NAME/cvpr26-auto-annotation-public

# Set up the PandaSet Devkit for LiDAR processing
pip install pandaset
```
Note: Refer to the Official PandaSet Devkit GitHub for instructions on loading LiDAR sweeps based on image frame IDs.
## 6. License & Citation
This dataset is built upon PandaSet and is subject to the PandaSet License Terms.
If you use this benchmark in your research or challenge submission, please cite our CVPR 2026 Workshop, the AutoExpert baseline paper, and the original PandaSet paper:
```bibtex
@inproceedings{pandaset2021,
  title     = {PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving},
  author    = {Xiao, Peng and Shao, Zili and Hao, Shaoyu and others},
  booktitle = {IEEE International Intelligent Transportation Systems Conference (ITSC)},
  year      = {2021},
  url       = {https://pandaset.org},
}
```