---
license: mit
language:
- en
pretty_name: TabMI-Bench
tags:
- mechanistic-interpretability
- tabular-foundation-models
- tabpfn
- tabicl
- tabdpt
- iltm
- benchmark
- protocol-benchmark
- neurips-2026
size_categories:
- n<1K
configs:
- config_name: rd5_fullscale_aggregated
  data_files: rd5_fullscale_aggregated.json
- config_name: tabdpt_probing_3seed
  data_files: tabdpt_probing_3seed.json
- config_name: tabdpt_causal_3seed
  data_files: tabdpt_causal_3seed.json
- config_name: nam_holdout
  data_files: nam_holdout.json
- config_name: lofo_primary_endpoint
  data_files: lofo_primary_endpoint.json
- config_name: c1c2_baselines
  data_files: c1c2_baselines.json
- config_name: scale_10k_multiseed
  data_files: scale_10k_multiseed.json
- config_name: tabpfn25_fullscale_aggregated
  data_files: tabpfn25_fullscale_aggregated.json
---
# TabMI-Bench
A protocol benchmark for mechanistic interpretability (MI) of tabular foundation models (TFMs). NeurIPS 2026 Evaluations & Datasets Track submission.
## What's in this dataset
This Hugging Face repository hosts the frozen aggregated artifacts behind every numbered table and figure in the paper. Bundling them here lets reviewers verify the paper's key numbers without re-running roughly 40 GPU-hours of experiments.
| File | Source experiment | Used by |
|---|---|---|
| `rd5_fullscale_aggregated.json` | Phase 5 multi-seed core comparison (5 seeds × 3 models × 4 functions) | Table 1 (3 strategies), Figure 1 |
| `tabdpt_probing_3seed.json` | TabDPT in-family holdout probing (3 seeds) | Table 1 TabDPT row, §4.3 |
| `tabdpt_causal_3seed.json` | TabDPT noising-based causal tracing (3 seeds) | §4.4 TabDPT causal claim |
| `nam_holdout.json` | NAM out-of-family holdout (5 seeds) | §4.3 NAM holdout boundary case |
| `lofo_primary_endpoint.json` | Leave-one-function-out robustness on primary endpoint | LOFO appendix |
| `c1c2_baselines.json` | Shuffled-label and random-target negative controls | §4.6, Tables 21–22 |
| `scale_10k_multiseed.json` | N=10K probing scale validation (3 seeds) | Appendix G |
| `tabpfn25_fullscale_aggregated.json` | TabPFN v2 vs v2.5 comparison (3 seeds) | Table 7, §G.1 |
## Code & full benchmark suite
The hooks, synthetic probe generators, evaluation scripts, statistical analysis, figure generation, and tests are hosted at: https://github.com/evaldataset/TabMI-Bench
To regenerate paper-facing tables and figures from this dataset without GPU access:
```bash
git clone https://github.com/evaldataset/TabMI-Bench
cd TabMI-Bench
pip install -r requirements.txt
make reproduce-paper-frozen
```
## What is TabMI-Bench?
TabMI-Bench provides:
- Hook-based activation extraction for 5 TFMs spanning 3 architectural families (TabPFN v2/v2.5, TabICL v2, TabDPT, iLTM) plus NAM out-of-family holdout
- 4 controlled synthetic probe families (bilinear, sinusoidal, polynomial, mixed) with known ground-truth intermediary variables
- 4-step evaluation protocol (synthetic profile → causal validation → negative controls → real-world transfer)
- Evidence-coded MI applicability matrix (8 techniques × 4 architectures with seed-count superscripts)
- A primary diagnostic finding: whole-layer clean activation patching is uninformative on ICL-style TFMs due to deterministic cascading; corruption-based (noising) tracing is the informative alternative.
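As a rough intuition for the corruption-based alternative, here is a toy sketch of noising-based tracing on a two-layer NumPy MLP. None of the names reflect the benchmark's real hook API, and a single-hidden-layer toy trivially shows full recovery; the benchmark applies this layer-by-layer to real TFMs:

```python
import numpy as np

# Toy sketch of noising-based causal tracing (illustrative names only):
# corrupt the input with noise, then restore the clean hidden activation
# and measure how much of the clean output is recovered.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(x, patch_hidden=None):
    h = np.tanh(x @ W1)
    if patch_hidden is not None:
        h = patch_hidden  # restore the clean activation at this layer
    return h @ W2, h

x_clean = rng.normal(size=(1, 4))
y_clean, h_clean = forward(x_clean)

x_noised = x_clean + rng.normal(scale=3.0, size=x_clean.shape)
y_noised, _ = forward(x_noised)
y_patched, _ = forward(x_noised, patch_hidden=h_clean)

# With only one hidden layer, restoring the clean hidden state recovers
# the clean output exactly.
print(np.allclose(y_patched, y_clean))  # True
```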
Three descriptive reference computation profiles emerge as calibration baselines:
- Staged (TabPFN): U-shaped intermediary recoverability with mid-layer concentration
- Distributed (TabICL, TabDPT): uniformly high recoverability across layers
- Preprocessing-dominant (iLTM): the tree+PCA preprocessing stage does the heavy lifting
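The "recoverability" in these profiles is scored against intermediaries that are known by construction. A minimal sketch of what one probe family might look like (hypothetical function and parameter names, not the benchmark's actual generator):

```python
import numpy as np

# Hypothetical "bilinear" probe family: the target y depends on the input
# only through the intermediary m = x1 * x2, so m is known exactly and
# layerwise probes can be scored directly against it.
def bilinear_probe_task(n=256, d=4, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    m = X[:, 0] * X[:, 1]               # ground-truth intermediary variable
    y = m + noise * rng.normal(size=n)  # target depends only on m
    return X, m, y

X, m, y = bilinear_probe_task()
print(X.shape, m.shape, y.shape)  # (256, 4) (256,) (256,)
```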
## Croissant 1.0 metadata
The repository includes `croissant.json` with 12 RAI fields (data limitations, biases, sensitive information, use cases, social impact, synthetic-data flag, source datasets, provenance, collection, maintenance plan, etc.).
## Source datasets
Real-world evaluation uses public datasets only (no new collection):
| Dataset | Source | License | Use |
|---|---|---|---|
| California Housing | OpenML 8092 | CC0 | causal tracing + steering |
| Diabetes | scikit-learn | BSD-3-Clause | causal tracing + steering |
| Wine Quality | OpenML 287 | CC0 | steering |
| Bike Sharing | OpenML 44063 | CC0 | steering |
| Abalone, Boston, Energy, Breast Cancer, Iris, Adult, Credit-G | OpenML / scikit-learn | CC0 / BSD-3-Clause | causal tracing |
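Of the sets above, the Diabetes regression data ships with scikit-learn and loads offline; the OpenML-sourced sets require a network fetch (e.g. via `sklearn.datasets.fetch_openml`). A quick offline sanity check:

```python
from sklearn.datasets import load_diabetes

# Bundled with scikit-learn (BSD-3-Clause); no download needed.
X, y = load_diabetes(return_X_y=True)
print(X.shape, y.shape)  # (442, 10) (442,)
```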
## Citation
```bibtex
@inproceedings{anonymous2026tabmibench,
  title={TabMI-Bench: Evaluating Mechanistic Interpretability Methods Across Tabular Foundation Model Architectures},
  author={Anonymous},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS) Evaluations \& Datasets Track},
  year={2026}
}
```
## License
MIT. See LICENSE.