# Multimodal Mechanistic Interpretability Benchmark (MMIB) Dataset
The MMIB Dataset is a highly controlled, synthetic vision-language dataset designed to rigorously evaluate mechanistic interpretability (MI) methods in vision-language models (VLMs). Built upon procedurally generated CLEVR-style assets, this dataset provides exact ground-truth causal pathways to test whether MI techniques (such as causal tracing or interchange interventions) localize genuine cognitive circuits or merely identify descriptive correlations.
Unlike standard VQA benchmarks, MMIB uses strict automated rejection sampling to eliminate geometric ambiguity, ensuring every spatial and causal relationship is mathematically verifiable.
## Dataset Structure & Interventions
This dataset is built on a structured intervention triplet for every base scene. Each row provides a complete $3 \times 3$ cross-modal evaluation matrix (3 images $\times$ 3 text queries), allowing researchers to systematically trace cross-modal information flow.
### 1. Semantic Counterfactuals (Causal Reasoning)
To evaluate the model's internal causal logic, we generate minimal counterfactual pairs where the intervention mathematically guarantees a change in the ground-truth answer ($y' \neq y$).
- **Image-Based CFs:** 10 targeted 3D scene-graph edits (e.g., `change_color`, `change_position`, `relational_flip`) that alter the visual logic while keeping the question fixed.
- **Text-Based CFs:** Minimal deterministic mutations to the textual query (e.g., swapping "red" for "blue" or "left" for "right") that guarantee an answer flip on the fixed base image.
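A text-based counterfactual of this kind can be sketched as a single deterministic attribute swap. The swap table below is illustrative only and is an assumption, not the dataset's actual mutation generator:

```python
# Minimal sketch of a text-based counterfactual mutation.
# The swap table is a hypothetical example, not MMIB's real generator.
SWAPS = {"red": "blue", "blue": "red", "left": "right", "right": "left"}

def mutate_question(question: str) -> str:
    """Apply the first applicable deterministic swap to the query."""
    for word, replacement in SWAPS.items():
        if word in question.split():
            return question.replace(word, replacement)
    return question  # no applicable mutation found

print(mutate_question("Is the red cube left of the sphere?"))
# The single minimal edit ("red" -> "blue") flips the ground-truth answer
# on the fixed base image.
```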
### 2. Negative Counterfactuals (Diagnostic Stress Tests)
To control for basic visual fragility, we generate Negative Counterfactuals featuring 8 types of perceptual corruptions (e.g., `add_noise`, `change_lighting`, `apply_fisheye`). These interventions drastically alter the image distribution without changing the underlying 3D geometry or ground-truth answer ($y' = y$). They serve as an experimental baseline: if a model fails on these stress tests, its failure on semantic tasks indicates vulnerability to domain shifts rather than flawed causal logic.
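A corruption of this kind is answer-preserving by construction: it perturbs pixels, not the scene graph. The sketch below, assuming a NumPy image array, illustrates the idea; the function name mirrors the `add_noise` label above, but the implementation is an assumption, not the dataset's actual corruption pipeline:

```python
import numpy as np

def add_noise(image: np.ndarray, sigma: float = 10.0, seed: int = 0) -> np.ndarray:
    """Add Gaussian pixel noise; the 3D geometry and ground-truth answer are untouched."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A flat gray 720x720 RGB array stands in for a real render here.
img = np.full((720, 720, 3), 128, dtype=np.uint8)
corrupted = add_noise(img)  # different pixel distribution, same answer y' = y
```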
## Using the Dataset
### Loading from Python
The dataset is hosted in standard Parquet format. You can load it directly into your mechanistic evaluation pipeline using the Hugging Face datasets library:
```python
from datasets import load_dataset

# Load the MMIB dataset
ds = load_dataset("scholo/MMB_dataset", split="train")

# Inspect the 3x3 evaluation matrix for the first scene
print("Base Question:", ds[0]['original_question'])
print("Base Image -> Base Question Answer:", ds[0]['original_image_answer_to_original_question'])
print("Semantic CF Image -> Base Question Answer:", ds[0]['cf1_image_answer_to_original_question'])
```
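To work with the full $3 \times 3$ matrix rather than individual cells, one option is to collect every answer column of a row by its naming pattern. Only the two answer columns shown above are confirmed; the sketch below assumes the remaining cells follow the same `*_answer_to_*` scheme, which should be verified against `ds.column_names`:

```python
# Sketch: gather all cells of the cross-modal answer grid from one dataset row.
# Assumes every answer column contains "_answer_to_" in its name.
def answer_matrix(row: dict) -> dict:
    """Return {column_name: answer} for every evaluation-matrix cell in a row."""
    return {k: v for k, v in row.items() if "_answer_to_" in k}

# Hypothetical row with the two confirmed columns:
row = {
    "original_question": "Is the red cube left of the sphere?",
    "original_image_answer_to_original_question": "yes",
    "cf1_image_answer_to_original_question": "no",
}
matrix = answer_matrix(row)
```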
(No `trust_remote_code=True` is required.)

### Directory Structure

```
MMB-Dataset/
├── README.md                              # This dataset card
├── .gitattributes                         # Git LFS configuration
├── data/                                  # Dataset files (Parquet format)
│   └── train.parquet                      # Main benchmark matrix
└── Dataset/                               # Raw generation artifacts
    ├── images/                            # Uncompressed PNG renders (720x720)
    ├── scenes/                            # JSON 3D scene graphs and metadata
    ├── image_mapping_with_questions.csv   # Source mapping for the 3x3 grid
    └── run_metadata.json                  # Procedural generation engine parameters
```
## Application & Protocol

Following the rigorous evaluation protocol established in the MMIB paper, interpretability metrics (such as Circuit Performance Ratio, Circuit-Model Distance, and Interchange Intervention Accuracy) should only be computed on samples where the target VLM correctly answers the base question ($a_b = y$). This behavioral filter ensures that the model possesses the causal circuit prior to mechanistic evaluation.

## License

MIT
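The behavioral filter described above can be sketched as a simple pre-screening pass. The column name reuses the confirmed base-answer field; `predictions` (the target VLM's answers to the base questions) is a hypothetical input for illustration:

```python
# Sketch of the behavioral filter: keep only samples where the model's answer
# to the base question matches the ground truth (a_b = y), so interpretability
# metrics are computed only where the causal circuit is behaviorally present.
def behavioral_filter(samples, predictions):
    """Return (sample, prediction) pairs whose base answer is correct."""
    return [
        (s, p) for s, p in zip(samples, predictions)
        if p == s["original_image_answer_to_original_question"]
    ]

# Hypothetical usage: the second sample is dropped (model said "yes", truth is "no").
samples = [
    {"original_image_answer_to_original_question": "yes"},
    {"original_image_answer_to_original_question": "no"},
]
kept = behavioral_filter(samples, ["yes", "yes"])
```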