---
license: mit
task_categories:
- video-classification
- text-classification
- object-detection
language:
- en
tags:
- pedsimbench
- pedestrian-simulation
- autonomous-vehicles
- pedestrian-behavior
- behavior-prediction
- temporal-annotation
- video-understanding
pretty_name: PedSimBench - Pedestrian Simulation Benchmark Dataset
size_categories:
- 1K<n<10K
---
# PedSimBench: Pedestrian Simulation Benchmark Dataset

## Dataset Description
PedSimBench is a comprehensive collection of real-world video annotations focused on pedestrian behavior in traffic scenarios. This dataset is specifically designed for autonomous vehicle research, particularly for understanding pedestrian decision-making, behavioral patterns, and critical interaction scenarios between pedestrians and vehicles.
The dataset contains frame-level annotations of pedestrian behaviors, vehicle responses, environmental contexts, and behavioral archetypes extracted from real-world traffic videos. This makes it well suited for training and evaluating pedestrian prediction models, risk assessment systems, and autonomous vehicle decision-making algorithms.
## Use Cases
This dataset supports multiple research and development applications:
- Pedestrian Behavior Prediction: Train models to anticipate pedestrian actions in traffic scenarios
- Risk Assessment: Develop systems to evaluate collision risk based on pedestrian and vehicle behaviors
- Autonomous Vehicle Decision-Making: Improve AV response strategies to various pedestrian behaviors
- Traffic Safety Analysis: Study patterns and factors contributing to pedestrian-vehicle interactions
- Behavioral Archetype Recognition: Classify pedestrian types (jaywalkers, distracted pedestrians, etc.)
- Multi-modal Learning: Combine with video data for vision-based behavior understanding
## Dataset Structure

### Data Fields
Each row in the dataset represents a single annotated temporal segment with the following fields:
| Column | Type | Description |
|---|---|---|
| `id` | integer | Unique identifier for each annotation |
| `video_path` | string | YouTube URL of the source video |
| `start_frame` | integer | Starting frame number of the annotated segment |
| `end_frame` | integer | Ending frame number of the annotated segment |
| `pedestrian_behavior_tags` | string | Comma-separated tags describing pedestrian behaviors |
| `vehicle_tags` | string | Comma-separated tags describing vehicle/ego behaviors |
| `environment_tags` | string | Comma-separated tags describing scene and environmental conditions |
| `archetypes` | string | Comma-separated behavioral archetype classifications |
## Source Data
The dataset is derived from real-world traffic videos, primarily sourced from YouTube, capturing authentic pedestrian-vehicle interactions across various:
- Geographic locations
- Traffic conditions
- Time periods (day/night)
- Road types and configurations
- Weather conditions
## Annotation Process
Each video segment was manually annotated by trained annotators who identified:
- Temporal boundaries (start and end frames) of interaction events
- Pedestrian behaviors observed during the segment
- Vehicle responses and actions
- Environmental and contextual factors
- Behavioral archetype classifications
### Annotation Quality
- Frame-level precision: Annotations specify exact frame numbers for temporal accuracy
- Multi-label approach: Multiple tags can be assigned to capture complex behaviors
- Contextual completeness: Each annotation includes pedestrian, vehicle, environment, and archetype information
## Limitations
- Video Quality: Source videos vary in resolution, frame rate, and quality
- Annotation Subjectivity: Some behavioral interpretations may contain subjective elements
- Geographic Bias: Dataset may over-represent certain regions or traffic cultures
- Scenario Coverage: May not capture all possible pedestrian-vehicle interaction types
- Temporal Resolution: Frame-level annotations depend on the frame rate of the source video
## Technical Specifications
- Frame Rate: Typically 30 FPS (varies by source video)
- Annotation Format: CSV with comma-separated multi-label tags
- Video Access: Via YouTube URLs (requires an internet connection; videos must be fetched separately)
- Recommended Processing: Frame extraction from videos using provided frame numbers
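Since frame rates vary by source video, annotated frame ranges should be converted to timestamps using the actual FPS of each downloaded video before seeking or extracting frames. A minimal sketch, assuming the frame numbers follow the `start_frame`/`end_frame` convention above (the 30 FPS default mirrors the typical rate noted in the specifications):

```python
def frames_to_seconds(start_frame, end_frame, fps=30.0):
    """Convert an annotated frame range to (start, end) timestamps in seconds.

    fps defaults to 30 as is typical for the source videos, but should be
    read from the actual video file, since frame rates vary by source.
    """
    if end_frame < start_frame:
        raise ValueError("end_frame must be >= start_frame")
    if fps <= 0:
        raise ValueError("fps must be positive")
    return start_frame / fps, end_frame / fps
```

With the timestamps in hand, any standard video library (e.g. OpenCV or ffmpeg) can be used to seek to the segment and extract its frames.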