---
license: mit
task_categories:
  - video-classification
  - text-classification
  - object-detection
language:
  - en
tags:
  - pedsimbench
  - pedestrian-simulation
  - autonomous-vehicles
  - pedestrian-behavior
  - behavior-prediction
  - temporal-annotation
  - video-understanding
pretty_name: PedSimBench - Pedestrian Simulation Benchmark Dataset
size_categories:
  - 1K<n<10K
---

# PedSimBench: Pedestrian Simulation Benchmark Dataset

## Dataset Description

PedSimBench is a comprehensive collection of real-world video annotations focused on pedestrian behavior in traffic scenarios. This dataset is specifically designed for autonomous vehicle research, particularly for understanding pedestrian decision-making, behavioral patterns, and critical interaction scenarios between pedestrians and vehicles.

The dataset contains frame-level annotations of pedestrian behaviors, vehicle responses, environmental contexts, and behavioral archetypes extracted from real-world traffic videos, making it well suited to training and evaluating pedestrian prediction models, risk assessment systems, and autonomous vehicle decision-making algorithms.

## Use Cases

This dataset supports multiple research and development applications:

  1. Pedestrian Behavior Prediction: Train models to anticipate pedestrian actions in traffic scenarios
  2. Risk Assessment: Develop systems to evaluate collision risk based on pedestrian and vehicle behaviors
  3. Autonomous Vehicle Decision-Making: Improve AV response strategies to various pedestrian behaviors
  4. Traffic Safety Analysis: Study patterns and factors contributing to pedestrian-vehicle interactions
  5. Behavioral Archetype Recognition: Classify pedestrian types (jaywalkers, distracted pedestrians, etc.)
  6. Multi-modal Learning: Combine with video data for vision-based behavior understanding

## Dataset Structure

### Data Fields

Each row in the dataset represents a single annotated temporal segment with the following fields:

| Column | Type | Description |
|--------|------|-------------|
| `id` | integer | Unique identifier for each annotation |
| `video_path` | string | YouTube URL of the source video |
| `start_frame` | integer | Starting frame number of the annotated segment |
| `end_frame` | integer | Ending frame number of the annotated segment |
| `pedestrian_behavior_tags` | string | Comma-separated tags describing pedestrian behaviors |
| `vehicle_tags` | string | Comma-separated tags describing vehicle/ego behaviors |
| `environment_tags` | string | Comma-separated tags describing scene and environmental conditions |
| `archetypes` | string | Comma-separated behavioral archetype classifications |
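Because the four tag fields are comma-separated multi-label strings, they are usually split into lists at load time. A minimal loading sketch in Python, assuming a local copy of the annotation CSV with the columns above (the file path and the sample tag values are illustrative, not part of the dataset):

```python
import csv

TAG_COLUMNS = (
    "pedestrian_behavior_tags",
    "vehicle_tags",
    "environment_tags",
    "archetypes",
)

def parse_tags(cell: str) -> list[str]:
    """Split a comma-separated multi-label cell into clean tag strings."""
    return [tag.strip() for tag in cell.split(",") if tag.strip()]

def load_annotations(path: str) -> list[dict]:
    """Load the annotation CSV and split every multi-label column into a list."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for col in TAG_COLUMNS:
                row[col] = parse_tags(row.get(col) or "")
            # Frame boundaries are integers per the schema above.
            row["start_frame"] = int(row["start_frame"])
            row["end_frame"] = int(row["end_frame"])
            rows.append(row)
    return rows

# Parsing a single multi-label cell (tag values here are illustrative):
print(parse_tags("jaywalking, distracted, phone_use"))
# → ['jaywalking', 'distracted', 'phone_use']
```

Stripping whitespace around each tag guards against inconsistent spacing after commas in the raw CSV cells.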

## Source Data

The dataset is derived from real-world traffic videos, primarily sourced from YouTube, capturing authentic pedestrian-vehicle interactions across a range of:

- Geographic locations
- Traffic conditions
- Times of day (day/night)
- Road types and configurations
- Weather conditions

## Annotation Process

Each video segment was manually annotated by trained annotators who identified:

  1. Temporal boundaries (start and end frames) of interaction events
  2. Pedestrian behaviors observed during the segment
  3. Vehicle responses and actions
  4. Environmental and contextual factors
  5. Behavioral archetype classifications

### Annotation Quality

- Frame-level precision: Annotations specify exact frame numbers for temporal accuracy
- Multi-label approach: Multiple tags can be assigned to capture complex behaviors
- Contextual completeness: Each annotation includes pedestrian, vehicle, environment, and archetype information

## Limitations

  1. Video Quality: Source videos vary in resolution, frame rate, and quality
  2. Annotation Subjectivity: Some behavioral interpretations may contain subjective elements
  3. Geographic Bias: Dataset may over-represent certain regions or traffic cultures
  4. Scenario Coverage: May not capture all possible pedestrian-vehicle interaction types
  5. Temporal Resolution: Frame-level precision depends on the source video's frame rate

## Technical Specifications

- Frame Rate: Typically 30 FPS (varies by source video)
- Annotation Format: CSV with comma-separated multi-label tags
- Video Access: Via YouTube URLs (an internet connection is required, and videos must be fetched from YouTube)
- Recommended Processing: Extract frames from the source videos using the provided frame numbers
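To locate an annotated segment within a video, the frame indices must be mapped to timestamps using that video's frame rate. A small helper sketch, defaulting to the typical 30 FPS noted above (individual source videos may differ, so the actual FPS should be read from the video itself):

```python
def frame_to_seconds(frame: int, fps: float = 30.0) -> float:
    """Convert a frame index to a timestamp in seconds at the given frame rate."""
    return frame / fps

def segment_bounds(start_frame: int, end_frame: int,
                   fps: float = 30.0) -> tuple[float, float]:
    """Return (start, end) timestamps in seconds for an annotated segment."""
    return frame_to_seconds(start_frame, fps), frame_to_seconds(end_frame, fps)

# Frames 90-240 of a 30 FPS video span seconds 3.0 through 8.0:
print(segment_bounds(90, 240))  # → (3.0, 8.0)
```

The resulting timestamps can then be used to seek within a decoded video or to cut the segment with a tool such as ffmpeg (`-ss <start> -to <end>`).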