---
pretty_name: Punjabi Multimodal Visual Reasoning (PuMVR)
tags:
  - multimodal
  - visual-question-answering
  - multi-script
  - low-resource-language
  - punjabi
  - image-to-text
  - multiple-choice
language:
  - pa
  - en
language_bcp47:
  - pa-Guru
  - pa-Arab
  - pa-Latn
task_categories:
  - visual-question-answering
  - image-to-text
  - multiple-choice
  - question-answering
license: cc-by-4.0
configs:
  - config_name: default
multilinguality: multi-script
annotations_creators:
  - human
language_creators:
  - native-speakers
size_categories:
  - n<1K
---

# PuMVR: Punjabi Multimodal Visual Reasoning Benchmark

## 🌟 Dataset Overview

PuMVR (Punjabi Multimodal Visual Reasoning) is a benchmark designed to evaluate script-dependent performance biases in Vision-Language Models (VLMs). It addresses a critical gap: current VLM evaluations rarely test whether models are truly multi-script, a distinction that matters for languages such as Punjabi, which is actively written in multiple scripts.

The dataset features 375 unique image-text reasoning tasks focused on Punjabi culture, history, and daily life. All instances are translated and rigorously validated across the three active Punjabi writing systems: Gurmukhi (pa-Guru), Shahmukhi (pa-Arab), and Roman (pa-Latn).

- **Total Instances:** 375
- **Total Size:** 541 MB
- **Language:** Punjabi (pa) with three distinct script variants
- **Target Models:** State-of-the-art VLMs

## 📊 Dataset Structure and Statistics

The dataset is organized into a single split (train) and is composed of image data and corresponding textual annotations stored in a JSON file.

### Data Fields

The dataset schema contains all necessary components for running multiple-choice VQA across three scripts:

| Field Name | Data Type | Description |
|---|---|---|
| `id` | string | Unique identifier (e.g., `C1_001`). |
| `category` | string | The specific task category (1 of 6). |
| `image` | Image | The associated visual input (decoded from the file path). |
| `reasoning` | string | Human-written explanation for the ground-truth answer (in English). |
| `scripts_[script]_question` | string | The reasoning question in the specified script. |
| `scripts_[script]_options` | list[string] | The four multiple-choice options in the specified script. |
| `scripts_[script]_answer` | string | The single correct option in the specified script. |

(The [script] placeholder is one of: gurmukhi, shahmukhi, or roman.)
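A minimal sketch of how a record with this flattened schema might be consumed, e.g. to assemble a multiple-choice prompt for one script variant. The record below uses placeholder strings, not actual dataset content:

```python
# Illustrative record following the flattened PuMVR schema;
# all string values are placeholders, not real dataset content.
record = {
    "id": "C1_001",
    "category": "Visual Analogies",
    "reasoning": "Human-written English explanation (placeholder).",
    "scripts_gurmukhi_question": "<question in Gurmukhi>",
    "scripts_gurmukhi_options": ["<opt A>", "<opt B>", "<opt C>", "<opt D>"],
    "scripts_gurmukhi_answer": "<opt B>",
    # scripts_shahmukhi_* and scripts_roman_* fields follow the same pattern
}

def build_prompt(record: dict, script: str) -> str:
    """Assemble a multiple-choice VQA prompt for one script variant."""
    question = record[f"scripts_{script}_question"]
    options = record[f"scripts_{script}_options"]
    # Label the four options A-D, one per line, below the question.
    lines = [question] + [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines)

print(build_prompt(record, "gurmukhi"))
```

The same `record` can be re-prompted with `script="shahmukhi"` or `script="roman"` to test the identical item across writing systems.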

### Task Categories

The 375 instances are distributed across 6 categories, ensuring a comprehensive test of multimodal script robustness:

  1. **Visual Analogies**: Tests relational reasoning (e.g., Turban:Head :: Shoe:?).
  2. **Cultural Object Recognition**: Tests knowledge of Punjabi-specific cultural items (e.g., Phulkari, musical instruments).
  3. **Festival & Celebration Reasoning**: Tests cultural knowledge and temporal reasoning around regional events (e.g., Lohri).
  4. **Architectural & Landmark Recognition**: Tests visual and geographic grounding of regional landmarks (e.g., Golden Temple).
  5. **Text-in-Image Reasoning**: Tests cross-script OCR and multimodal comprehension, including scenarios where image text and question text scripts are mismatched.
  6. **Abstract Visual-Linguistic Reasoning**: Tests basic spatial and logical reasoning with Punjabi language labels.
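Since the benchmark's stated goal is to surface script-dependent performance gaps, a natural way to report results is per-script accuracy over the same items. A minimal sketch, where the records and model predictions are toy stand-ins rather than real dataset content or model outputs:

```python
# Compare a model's accuracy across the three script variants of the
# same items. `predictions` maps (item id, script) -> predicted option
# and is a stand-in for real model outputs.
SCRIPTS = ("gurmukhi", "shahmukhi", "roman")

def per_script_accuracy(records, predictions):
    """Return {script: accuracy} over identical items rendered in each script."""
    scores = {}
    for script in SCRIPTS:
        correct = sum(
            predictions.get((r["id"], script)) == r[f"scripts_{script}_answer"]
            for r in records
        )
        scores[script] = correct / len(records)
    return scores

# Toy example: two items; the model errs only on the Roman rendering of C1_002.
records = [
    {"id": "C1_001", "scripts_gurmukhi_answer": "a",
     "scripts_shahmukhi_answer": "a", "scripts_roman_answer": "a"},
    {"id": "C1_002", "scripts_gurmukhi_answer": "b",
     "scripts_shahmukhi_answer": "b", "scripts_roman_answer": "b"},
]
predictions = {
    ("C1_001", "gurmukhi"): "a", ("C1_001", "shahmukhi"): "a", ("C1_001", "roman"): "a",
    ("C1_002", "gurmukhi"): "b", ("C1_002", "shahmukhi"): "b", ("C1_002", "roman"): "c",
}
print(per_script_accuracy(records, predictions))
# -> {'gurmukhi': 1.0, 'shahmukhi': 1.0, 'roman': 0.5}
```

Because every item appears in all three scripts, any accuracy gap between scripts isolates script sensitivity rather than item difficulty.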

โš–๏ธ Ethical and Legal Considerations

### Licenses

- **Data:** The PuMVR dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
- **Images:** The majority (approximately 95%) of the images are AI-generated (synthetic) to ensure cultural specificity and clear licensing; the remainder are sourced from the public domain, Wikimedia Commons, and original photography.

### Data Creation and Validation

The textual data was created and rigorously validated by a team of native speakers across both Indian and Pakistani Punjabi contexts to ensure semantic equivalence and cultural appropriateness across the Gurmukhi, Shahmukhi, and Roman scripts.

### Limitations

The dataset is highly focused on Punjabi culture, which introduces a domain-specific bias. The Romanization used reflects common digital usage but is not strictly standardized, mirroring real-world multi-script challenges.