![image/png](https://cdn-uploads.huggingface.co/production/uploads/68ba6d05839fc7bea38e872b/A22iPKAZ5jOlEy4D7yHGr.png)
<div style="text-align: center;">
<span style="color: #6E4DF7; font-size: 24px;font-weight: bold;">The world’s largest open-source, high-quality multimodal dataset for smart-cockpit perception is here!</span>
</div>
## <span style="color:#6E4DF7;">1. Dataset Summary</span>
“CyberData Saita” is a high-fidelity, programmatically generated image dataset created to accelerate the development of in-cabin perception algorithms. With global automotive safety regulations such as C-NCAP and EU GSR imposing stricter requirements on Driver-Monitoring Systems (DMS) and Occupant-Monitoring Systems (OMS), safe, compliant, and diverse training data has become critical. By leveraging synthetic generation, this dataset overcomes the key challenges of real-world data collection: privacy risks, high costs, and insufficient coverage of long-tail scenarios.
The dataset contains 5,000 photo-realistic cabin images synthesized by XAI Lab's proprietary generation engine. Every image is accompanied by rich, 100% accurate annotations.
**Key features**
- <span style="color:#6E4DF7;">Rich scene diversity</span>: spans virtual humans of different ages, genders, ethnicities, and clothing styles, together with a wide range of driving/riding behaviors (e.g., phone use, drinking, fatigue, gestures) and facial expressions.
- <span style="color:#6E4DF7;">Optimized for in-cabin perception</span>: ready for on-device vision models in smart cockpits, especially for training, fine-tuning, and validation of DMS/OMS algorithms that must precisely interpret complex interactions and occupant states.
- <span style="color:#6E4DF7;">Safe & compliant data source</span>: all images are purely synthetic, eliminating the privacy and portrait-right issues inherent in real-user data collection and providing a secure, controllable, and scalable data foundation.
- <span style="color:#6E4DF7;">Accurate programmatic labels</span>: all tags (bounding boxes, behavior classes, user attributes) are generated synchronously with the images, yielding zero annotation errors and perfect consistency while removing human subjectivity.
## <span style="color:#6E4DF7;">2. Supported Tasks & Leaderboards</span>
This dataset aims to advance computer-vision technologies inside the cabin, particularly for active safety and intelligent interaction of drivers and passengers.
**Main tasks**
- <span style="color:#6E4DF7;">Behavior Recognition</span>: classify critical occupant behaviors (e.g., `using_phone`, `smoking`, `drinking`, `yawning`) via image-level multi-label classification, which is vital for detecting distracted or dangerous driving.
- <span style="color:#6E4DF7;">Driver State Monitoring</span>: assess physiological/mental state (`drowsy`, `distracted`, `attentive`) to help prevent fatigue-related accidents.
- <span style="color:#6E4DF7;">Object Detection</span>: localize key objects such as faces, hands, and phones via bounding-box regression.
- <span style="color:#6E4DF7;">Attribute Recognition</span>: identify basic demographic attributes (`age_group`, `gender`) to enable personalized cabin settings.
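The multi-label behavior task above amounts to building a multi-hot target vector per image. A minimal sketch, assuming the JSON annotation layout shown in the Dataset Structure section; the class list here is illustrative and may differ from the released label set:

```python
# Illustrative behavior classes; the released label set may differ.
BEHAVIOR_CLASSES = ["using_phone", "smoking", "drinking", "yawning"]

def behavior_targets(annotation: dict) -> list[int]:
    """Return a multi-hot vector over BEHAVIOR_CLASSES for one sample."""
    present = {
        a["class"]
        for a in annotation.get("annotations", [])
        if a.get("label") == "behavior"
    }
    return [1 if c in present else 0 for c in BEHAVIOR_CLASSES]

sample = {
    "annotations": [
        {"label": "behavior", "class": "using_phone", "bbox": [300, 400, 500, 600]},
        {"label": "face", "bbox": [250, 150, 450, 350], "expression": "neutral"},
    ]
}
print(behavior_targets(sample))  # [1, 0, 0, 0]
```

Such vectors can feed a standard multi-label loss (e.g., binary cross-entropy per class) during training.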
**Leaderboards (planned)**
We intend to host future challenges and publish leaderboards that rank the best-performing models on a standardized test set. Evaluation metrics will likely include:
- <span style="color:#6E4DF7;">Behavior recognition</span>: mAP (mean Average Precision)
- <span style="color:#6E4DF7;">Object detection</span>: mAP@0.50
- <span style="color:#6E4DF7;">State monitoring</span>: F1-Score, Accuracy
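For the detection metric, a predicted box is typically counted as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.50. A minimal IoU sketch, assuming `[x1, y1, x2, y2]` pixel coordinates as used in the example annotations:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Under mAP@0.50, a prediction matches ground truth when iou(...) >= 0.5.
print(iou([250, 150, 450, 350], [250, 150, 450, 350]))  # 1.0
```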
## <span style="color:#6E4DF7;">3. Dataset Structure</span>
The dataset follows a clear, intuitive directory layout for easy access and parsing. All images are stored in `images/`; the corresponding JSON annotations reside in `annotations/`.
**Data instance**
<span style="color:#6E4DF7;">Each sample consists of one .jpg image and a matching .json file with identical filenames. Below is an example annotation (&lt;image_id&gt;.json):</span>
```json
{
  "image_path": "images/0001.jpg",
  "image_id": "0001",
  "attributes": {
    "age_group": "25-35",
    "gender": "male",
    "race": "asian",
    "clothing": "t-shirt"
  },
  "annotations": [
    {
      "label": "face",
      "bbox": [250, 150, 450, 350],
      "expression": "neutral"
    },
    {
      "label": "behavior",
      "class": "using_phone",
      "bbox": [300, 400, 500, 600]
    },
    {
      "label": "drowsiness",
      "class": "none",
      "confidence": 0.98
    }
  ]
}
```
---
license: apache-2.0
size_categories:
- 1K<n<10K
---
## <span style="color:#6E4DF7;">Contact</span>
## <span style="color:#6E4DF7;">Cooperation Email: victorhu@xailab.cn</span>
## <span style="color:#6E4DF7;">WeChat: VictorHu2022</span>
![image/png](https://cdn-uploads.huggingface.co/production/uploads/68ba6d05839fc7bea38e872b/vOr4FX34lKGfAqFWujqWD.png)