---
dataset_info:
  features:
    - name: img_fn
      dtype: image
    - name: metadata_fn
      dtype: string
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: boxes
      dtype: string
    - name: objects
      dtype: string
    - name: segms
      dtype: string
    - name: keywords
      dtype: string
    - name: question_orig
      dtype: string
    - name: question
      dtype: string
    - name: answer_choices
      dtype: string
    - name: answer_orig
      dtype: string
    - name: answer_label
      dtype: int64
  splits:
    - name: train
      num_bytes: 46088868
      num_examples: 104
  download_size: 40145967
  dataset_size: 46088868
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
language:
  - en
tags:
  - egypt
  - culture
  - arab
  - vision
  - language
  - LLM
  - VLM
  - VCR
  - common-sense-reasoning
  - multimodal
---

## Dataset Summary

EC-VCR (Egyptian Culture Visual Commonsense Reasoning) is a multimodal benchmark designed to evaluate the cultural reasoning capabilities of Vision-Language Models (VLMs) within the specific context of Egypt.

Inspired by the methodology of GD-VCR (Geo-Diverse Visual Commonsense Reasoning), this dataset moves beyond simple recognition ("What is this?") to high-order cognitive reasoning ("Why is this person performing this action?" or "What social event is taking place?"). It addresses the "cultural blind spot" in current AI models by focusing on scenarios unique to Egyptian daily life, traditions, and social dynamics.

This dataset is structured to support Visual Question Answering (VQA) and Visual Commonsense Reasoning (VCR) tasks, providing rich annotations including bounding boxes, object labels, and segmentation masks.

## Supported Tasks

- Visual Commonsense Reasoning (VCR): Answering "Why" and "How" questions that require external cultural knowledge.
- Visual Question Answering (VQA): Standard question-answering based on image content.
- Object Detection: Leveraging the provided bounding boxes and object tags.

## Dataset Structure

### Data Instances

Each instance in the dataset represents a single question-answer pair associated with an image and its corresponding visual annotations.

```json
{
  "img_fn": "EC-VCR/1.jpg",
  "metadata_fn": "EC-VCR/1.json",
  "width": 1920,
  "height": 1080,
  "boxes": [[100, 200, 150, 280], [300, 400, 360, 490]],
  "objects": ["person", "person"],
  "segms": [[[100, 200, 105, 205, ...]], [[300, 400, ...]]],
  "keywords": ["wedding", "street", "celebration"],
  "question_orig": "Why are [person1] and [person2] wearing matching outfits?",
  "question": ["Why", "are", [0], "and", [1], "wearing", "matching", "outfits", "?"],
  "answer_choices": [
    "They are participating in a local festival procession.",
    "They are security guards for the building.",
    "They are part of a wedding entourage.",
    "They are casually walking to work."
  ],
  "answer_orig": "They are part of a wedding entourage.",
  "answer_label": 2
}
```

### Data Fields

- img_fn: Image. The image itself; despite the field name, the datasets library decodes this to a PIL image rather than a path string.
- metadata_fn: String. The relative path to the source JSON containing segmentation and detailed metadata.
- width: Integer. The width of the image in pixels.
- height: Integer. The height of the image in pixels.
- boxes: List of lists. Bounding boxes for the detected objects, formatted as [x1, y1, x2, y2] (the default output format of Detectron2, which was used for detection).
- objects: List of strings. Class labels corresponding to the detected objects in boxes.
- segms: List of lists. Polygon points representing the segmentation masks for each object.
- keywords: List of strings. Categorical tags describing the scene context (e.g., "festival", "market").
- question_orig: String. The raw, natural-language question, containing tags such as [person1] that reference specific bounding boxes.
- question: List. The tokenized version of the question, with tags and punctuation separated for model input; person tags are replaced by lists of box indices (e.g., [0] for the first box).
- answer_choices: List of strings. The four candidate answers for the multiple-choice task.
- answer_orig: String. The correct answer in its original, untokenized form.
- answer_label: Integer. The zero-based index of the correct answer in answer_choices.

Per the schema above, the list-valued fields (boxes, objects, segms, keywords, question, and answer_choices) are stored as JSON-encoded strings and should be parsed after loading (see the sketch below).
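A minimal sketch that decodes one example and overlays its boxes with Pillow (the repo id is taken from the Citation section; the [x1, y1, x2, y2] box layout is the assumption stated in the boxes description above):

```python
import json

from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("CulTex-VLM/EG-VCR", split="train")
example = dataset[0]

# List-valued fields are stored as JSON-encoded strings, so decode them first.
boxes = json.loads(example["boxes"])      # assumed [[x1, y1, x2, y2], ...]
objects = json.loads(example["objects"])  # e.g., ["person", "person"]

# img_fn is an Image feature, so it is already a PIL image after loading.
image = example["img_fn"].copy()
draw = ImageDraw.Draw(image)
for (x1, y1, x2, y2), label in zip(boxes, objects):
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
    draw.text((x1, max(0, y1 - 12)), label, fill="red")

image.save("annotated_example.png")
```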

## Dataset Creation

### Curation Rationale

Standard VCR datasets are heavily skewed toward Western contexts. As highlighted by the GD-VCR paper, models trained on these datasets fail to generalize to non-Western regions. EC-VCR fills this gap for Egypt, covering local customs, street scenes, and social interactions that global models often misinterpret.

### Source Data

The images were collected and curated from movies, documentaries, and other online sources.


### Annotation Process

The dataset follows a VCR-style annotation pipeline:

1. Object Detection: Key objects are localized with bounding boxes and segmentation masks using the Detectron2 library (see the sketch below).
2. Question Generation: Questions are designed to be high-order, requiring the model to combine visual cues (the detected objects) with implicit cultural knowledge.
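The card does not state which detector configuration was used, so the following is an illustrative sketch with a COCO-pretrained Mask R-CNN from the Detectron2 model zoo; it produces boxes, class labels, and masks of the kind this dataset stores:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Illustrative checkpoint only: the exact model used for EC-VCR is not specified.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("EC-VCR/1.jpg"))  # BGR image, as Detectron2 expects

instances = outputs["instances"].to("cpu")
boxes = instances.pred_boxes.tensor.numpy()  # [x1, y1, x2, y2] per detection
classes = instances.pred_classes.numpy()     # COCO class indices
masks = instances.pred_masks.numpy()         # one binary mask per detection
```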

## Usage

### Loading the Dataset

```python
import json

from datasets import load_dataset

# Load the dataset (repo id as given in the Citation section)
dataset = load_dataset("CulTex-VLM/EG-VCR")

# Access an example
example = dataset["train"][0]
image = example["img_fn"]             # an Image feature, decoded to a PIL image
question = example["question_orig"]   # the raw question string
boxes = json.loads(example["boxes"])  # list fields are JSON-encoded strings

print(f"Question: {question}")
image.show()
```

## Benchmarking & Evaluation

EC-VCR is designed to test cultural alignment. High accuracy on this dataset indicates that a model understands the following (a minimal scoring loop is sketched after the list):

1. Visual Recognition: Identifying local objects (e.g., a fanoos, the traditional Ramadan lantern).
2. Social Reasoning: Understanding the intent and context behind actions in an Egyptian setting (e.g., distinct gestures, seating arrangements, or ceremonial traditions).
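As a sketch of how the benchmark can be scored: the loop below rebuilds each question from its tokens, presents the four choices to a placeholder predict function (replace it with a call to the VLM under evaluation), and compares the prediction against answer_label. The detag helper is a hypothetical utility based on the question format described above.

```python
import json

from datasets import load_dataset


def detag(question_tokens, objects):
    """Replace VCR-style index tokens (e.g., [0]) with the referenced object names."""
    parts = []
    for tok in question_tokens:
        if isinstance(tok, list):  # a reference to one or more detected objects
            parts.append(" and ".join(objects[i] for i in tok))
        else:
            parts.append(tok)
    return " ".join(parts)


def predict(image, question, choices):
    """Placeholder: swap in the VLM under evaluation; returns a choice index."""
    return 0  # trivial baseline: always pick the first choice


dataset = load_dataset("CulTex-VLM/EG-VCR", split="train")

correct = 0
for example in dataset:
    objects = json.loads(example["objects"])
    question = detag(json.loads(example["question"]), objects)
    choices = json.loads(example["answer_choices"])
    if predict(example["img_fn"], question, choices) == example["answer_label"]:
        correct += 1

print(f"Accuracy: {correct / len(dataset):.2%}")
```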

## Citation

If you use this dataset, please cite the following work:

```bibtex
@misc{gamil2025ecvcr,
  author = {Mohamed Gamil and Abdelrahman Elsayed and Abdelrahman Lila and Ahmed Gad and Hesham Abdelgawad and Mohamed Aref and Ahmed Fares},
  title = {EC-VCR: A Visual Commonsense Reasoning Benchmark for Egyptian Culture},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/CulTex-VLM/EG-VCR}}
}
```

Methodology inspired by:

```bibtex
@article{yin2021broaden,
  title={Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning},
  author={Yin, Da and Li, Liunian Harold and Hu, Ziniu and Peng, Nanyun and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2109.06860},
  year={2021}
}
```