---
license: mit
pretty_name: MMIB Evaluation Dataset
task_categories:
  - visual-question-answering
  - multiple-choice
language:
  - en
tags:
  - vision
  - language
  - multimodal
  - counterfactual
  - mechanistic-interpretability
  - synthetic
size_categories:
  - n<1K
dataset_info:
  features:
    - name: original_image
      dtype: image
    - name: counterfactual1_image
      dtype: image
    - name: counterfactual2_image
      dtype: image
    - name: counterfactual1_type
      dtype: string
    - name: counterfactual2_type
      dtype: string
    - name: counterfactual1_description
      dtype: string
    - name: counterfactual2_description
      dtype: string
    - name: original_question
      dtype: string
    - name: counterfactual1_question
      dtype: string
    - name: counterfactual2_question
      dtype: string
    - name: original_question_difficulty
      dtype: string
    - name: counterfactual1_question_difficulty
      dtype: string
    - name: counterfactual2_question_difficulty
      dtype: string
    - name: original_image_answer_to_original_question
      dtype: string
    - name: original_image_answer_to_cf1_question
      dtype: string
    - name: original_image_answer_to_cf2_question
      dtype: string
    - name: cf1_image_answer_to_original_question
      dtype: string
    - name: cf1_image_answer_to_cf1_question
      dtype: string
    - name: cf1_image_answer_to_cf2_question
      dtype: string
    - name: cf2_image_answer_to_original_question
      dtype: string
    - name: cf2_image_answer_to_cf1_question
      dtype: string
    - name: cf2_image_answer_to_cf2_question
      dtype: string
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Multimodal Mechanistic Interpretability Benchmark (MMIB) Dataset

The MMIB Dataset is a highly controlled, synthetic vision-language dataset designed to rigorously evaluate mechanistic interpretability (MI) methods in vision-language models (VLMs). Built on procedurally generated CLEVR-style assets, it provides exact ground-truth causal pathways to test whether MI techniques (such as causal tracing or interchange interventions) localize genuine cognitive circuits or merely identify descriptive correlations.

Unlike standard VQA benchmarks, MMIB uses strict automated rejection sampling to eliminate geometric ambiguity, ensuring every spatial and causal relationship is mathematically verifiable.

## Dataset Structure & Interventions

This dataset is built on a structured intervention triplet for every base scene. Each row provides a complete $3 \times 3$ cross-modal evaluation matrix (3 images $\times$ 3 text queries), allowing researchers to systematically trace cross-modal information flow.
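The nine answer columns can be assembled into this matrix programmatically. A minimal sketch, assuming only the column-naming scheme shown in the schema above (the dummy row is purely illustrative):

```python
# Assemble the 3x3 evaluation matrix for one dataset row.
# The nine ground-truth answer columns follow the naming scheme
# "<image>_answer_to_<question>" from the schema above.
IMAGES = ["original_image", "cf1_image", "cf2_image"]
QUESTIONS = ["original_question", "cf1_question", "cf2_question"]

def answer_matrix(row):
    """Return matrix[image][question] -> ground-truth answer string."""
    return {img: {q: row[f"{img}_answer_to_{q}"] for q in QUESTIONS}
            for img in IMAGES}

# Dummy row for illustration; real rows come from load_dataset.
row = {f"{img}_answer_to_{q}": "yes" for img in IMAGES for q in QUESTIONS}
matrix = answer_matrix(row)
```

Each of the nine cells pairs one of the three images with one of the three questions, which is what enables tracing where information flows when either modality is intervened on.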

### 1. Semantic Counterfactuals (Causal Reasoning)

To evaluate the model's internal causal logic, we generate minimal counterfactual pairs where the intervention mathematically guarantees a change in the ground-truth answer ($y' \neq y$).

- **Image-Based CFs:** 10 targeted 3D scene graph edits (e.g., `change_color`, `change_position`, `relational_flip`) that alter the visual logic while keeping the question fixed.
- **Text-Based CFs:** Minimal deterministic mutations to the textual query (e.g., swapping "red" for "blue" or "left" for "right") that guarantee an answer flip on the fixed base image.
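As a concrete illustration of such a deterministic text mutation, here is a hypothetical sketch (the dataset's actual generator is not shipped with this card; the swap table and function are illustrative assumptions):

```python
# Hypothetical sketch of a text-based counterfactual: swap one attribute
# token so the answer on the fixed base image must flip.
# (Illustrative only; not the dataset's actual generation code.)
SWAPS = {"red": "blue", "blue": "red", "left": "right", "right": "left"}

def text_counterfactual(question: str) -> str:
    """Mutate only the first swappable token; return unchanged if none."""
    tokens = question.split()
    for i, tok in enumerate(tokens):
        bare = tok.strip("?.,").lower()
        if bare in SWAPS:
            tokens[i] = tok.replace(bare, SWAPS[bare])
            return " ".join(tokens)
    return question

print(text_counterfactual("Is the red cube left of the sphere?"))
# -> "Is the blue cube left of the sphere?"
```

Because the edit is deterministic and touches exactly one attribute, the ground-truth answer flip can be verified against the fixed scene graph.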

### 2. Negative Counterfactuals (Diagnostic Stress Tests)

To control for basic visual fragility, we generate Negative Counterfactuals featuring 8 types of perceptual corruptions (e.g., `add_noise`, `change_lighting`, `apply_fisheye`). These interventions drastically alter the image distribution without changing the underlying 3D geometry or ground-truth answer ($y' = y$). They serve as an experimental baseline: if a model also fails on these stress tests, its failures on semantic tasks are more likely due to vulnerability to distribution shift than to flawed causal logic.
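The two counterfactual kinds can be told apart directly from the answer columns: semantic CFs flip the ground truth on the base question, negative CFs preserve it. A minimal sketch (the illustrative row assumes cf1 is a semantic edit and cf2 a perceptual corruption; in the actual data the `counterfactual1_type`/`counterfactual2_type` fields record each edit):

```python
# Classify a counterfactual by its ground-truth effect on the fixed base
# question: semantic CFs flip the answer (y' != y), negative/perceptual
# CFs preserve it (y' == y).
def cf_flips_answer(row, cf: str) -> bool:
    """cf is 'cf1' or 'cf2'; compares ground truth on the base question."""
    return (row[f"{cf}_image_answer_to_original_question"]
            != row["original_image_answer_to_original_question"])

# Illustrative row (real values come from the dataset):
row = {
    "original_image_answer_to_original_question": "yes",
    "cf1_image_answer_to_original_question": "no",   # semantic edit flipped it
    "cf2_image_answer_to_original_question": "yes",  # corruption preserved it
}
```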

## Using the Dataset

### Loading from Python

The dataset is hosted in standard Parquet format. You can load it directly into your mechanistic evaluation pipeline using the Hugging Face datasets library:

```python
from datasets import load_dataset

# Load the MMIB dataset
ds = load_dataset("scholo/MMB_dataset", split="train")

# Inspect the 3x3 evaluation matrix for the first scene
print("Base Question:", ds[0]['original_question'])
print("Base Image -> Base Question Answer:", ds[0]['original_image_answer_to_original_question'])
print("Semantic CF Image -> Base Question Answer:", ds[0]['cf1_image_answer_to_original_question'])
```
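The card's evaluation protocol computes MI metrics only on rows where the model answers the base question correctly. A minimal sketch of that behavioral filter, where `model_answer` is a hypothetical stand-in for your own VLM inference call (not part of this dataset):

```python
# Behavioral filter from the card's protocol: keep only rows where the
# target VLM answers the base question correctly (a_b == y).
# `model_answer(image, question) -> str` is a hypothetical stand-in for
# your own VLM inference call.
def passes_behavioral_filter(row, model_answer) -> bool:
    prediction = model_answer(row["original_image"], row["original_question"])
    return prediction == row["original_image_answer_to_original_question"]

# With a loaded Hugging Face dataset:
#   filtered = ds.filter(lambda row: passes_behavioral_filter(row, model_answer))
```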
(No `trust_remote_code=True` is required.)

### Directory Structure

```
MMB-Dataset/
├── README.md                            # This dataset card
├── .gitattributes                       # Git LFS configuration
├── data/                                # Dataset files (Parquet format)
│   └── train.parquet                    # Main benchmark matrix
└── Dataset/                             # Raw generation artifacts
    ├── images/                          # Uncompressed PNG renders (720x720)
    ├── scenes/                          # JSON 3D scene graphs and metadata
    ├── image_mapping_with_questions.csv # Source mapping for the 3x3 grid
    └── run_metadata.json                # Procedural generation engine parameters
```
## Application & Protocol

Following the rigorous evaluation protocol established in the MMIB paper, interpretability metrics (such as Circuit Performance Ratio, Circuit-Model Distance, and Interchange Intervention Accuracy) should only be computed on samples where the target VLM correctly answers the base question ($a_b = y$). This behavioral filter ensures that the model possesses the causal circuit prior to mechanistic evaluation.

## License

MIT