
VLM Reality Check: A Causal-Contrastive Benchmark for Vision-Language Models

VLM Reality Check is a large-scale, automatically generated dataset for evaluating the causal reasoning and bias resistance of Vision-Language Models (VLMs). It comprises 95,317 challenges across 14 bias types, using counterfactual image transforms to create minimal-edit contrastive pairs.

Dataset Summary

  • Total Challenges: 95,317
  • Source Images: 1,868 (from Kaggle/ECCV v3 pipeline)
  • Counterfactual Images: 27,728
  • Bias Types: 14 (Spatial, Compositional, Counting, Cultural, etc.)
  • Languages: 5 (English, Spanish, Chinese, Hindi, Arabic)
  • Total Multilingual Instances: 476,585
  • Size: ~3.9 GB

Bias Types and Distribution

  • Temporal Reasoning: 12,001
  • Counting: 9,932
  • Cultural Visual Bias: 9,281
  • Texture: 8,817
  • Spatial Relations: 8,691
  • Physical Plausibility: 8,433
  • Temporal Consistency: 8,172
  • Text in Image: 7,545
  • Occlusion Gradient: 6,176
  • Scale Invariance: 6,064
  • Typography Conflict: 6,063
  • Compositional Binding: 2,191
  • Compound: 1,025
  • Spurious Correlation: 926

Difficulty Distribution

  • Easy: 34,195
  • Medium: 38,902
  • Hard: 22,220

Dataset Structure

  • images/: Original source images.
  • counterfactuals/: Generated counterfactual images (split into subdirectories 0, 1, 2 to comply with directory limits).
  • annotations/: YOLO-based detections and metadata for source images.
  • challenges/: The core challenge JSONL files.
  • translations/: Multilingual versions of the challenges.
  • configs/: Generation configurations.
  • final_dataset.jsonl: The complete dataset in a single file.
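Because counterfactuals/ is split into the subdirectories 0, 1, and 2, resolving an image ID to a file takes one extra lookup step. The sketch below shows one way to do this by searching each subdirectory; the filename scheme (`<image_id>.<ext>`) and the demo IDs are assumptions, not part of the dataset specification, so adapt the glob pattern to the actual files in your copy.

```python
# Sketch: resolving an image ID inside the split counterfactuals/ directory.
# ASSUMPTION: files are named "<image_id>.<ext>"; adjust for the real layout.
from pathlib import Path
import tempfile

def find_counterfactual(root: Path, image_id: str):
    """Search the numbered subdirectories 0/1/2 for a file matching image_id."""
    for subdir in ("0", "1", "2"):
        matches = sorted((root / subdir).glob(f"{image_id}.*"))
        if matches:
            return matches[0]
    return None

# Demo against a throwaway directory layout standing in for the real dataset.
tmp = Path(tempfile.mkdtemp())
for name in ("0", "1", "2"):
    (tmp / name).mkdir()
(tmp / "1" / "cf_000123.png").touch()  # hypothetical counterfactual image

found = find_counterfactual(tmp, "cf_000123")
missing = find_counterfactual(tmp, "cf_999999")
```

With the real dataset, `root` would be the path to the downloaded counterfactuals/ directory, and `image_id` would come from a challenge's image_b_id field.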

How to Use

Each entry in final_dataset.jsonl contains:

  • challenge_id: Unique identifier.
  • bias_type: The category of visual reasoning tested.
  • image_a_id / image_b_id: References to images in images/ or counterfactuals/.
  • question_translated: The VQA question.
  • correct_answer: The ground truth.
  • distractor_answers: Incorrect but plausible options.
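Since final_dataset.jsonl stores one JSON object per line with the fields above, it can be streamed without loading all ~3.9 GB at once. The sketch below shows a minimal line-by-line reader and a bias-type filter; the sample record is illustrative only (the IDs, question, and answers are invented, not taken from the dataset).

```python
# Sketch: streaming challenges from a JSONL file and filtering by bias_type.
import json

def iter_challenges(lines):
    """Yield one challenge dict per non-blank JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Invented sample record mirroring the documented fields (not real data).
sample_lines = [
    '{"challenge_id": "ch_0001", "bias_type": "counting", '
    '"image_a_id": "img_001", "image_b_id": "cf_001", '
    '"question_translated": "How many cups are on the table?", '
    '"correct_answer": "3", "distractor_answers": ["2", "4", "5"]}',
]

counting = [c for c in iter_challenges(sample_lines)
            if c["bias_type"] == "counting"]
```

Against the real file, pass an open handle instead of the sample list, e.g. `with open("final_dataset.jsonl") as f: for c in iter_challenges(f): ...`.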

License

This dataset is released under the Apache 2.0 License.
