# TabMI-Bench — Benchmark Datasheet

This card follows the "Datasheets for Datasets" template of Gebru et al. (2021) and the "Model Cards" template of Mitchell et al. (2019).

---

## 1. Motivation

- **Purpose**: TabMI-Bench provides a standardized protocol for evaluating mechanistic interpretability (MI) methods on Tabular Foundation Models (TFMs). It fills the gap between mature LLM MI evaluation (e.g., MIB) and the growing but uninterpreted TFM ecosystem.
- **Creators**: Anonymous (double-blind submission). Will be de-anonymized upon acceptance.
- **Funding**: [To be disclosed upon acceptance]
- **Date of creation**: January–April 2026

## 2. Composition

### Benchmark Components

| Component | Description | Files | Format |
|-----------|-------------|-------|--------|
| Hook infrastructure | Activation extraction for 5 TFMs | `src/hooks/*.py` (10 files) | Python |
| Synthetic data generators | 4 function types (bilinear, sinusoidal, polynomial, mixed) | `src/data/synthetic_generator.py` | Python |
| Real-world dataset loaders | 11 causal tracing + 4 steering datasets | `src/data/real_world_datasets.py` | Python (scikit-learn, OpenML) |
| Evaluation scripts | Experiment scripts for all MI techniques | `experiments/*.py` | Python |
| Reference baselines | Multi-seed results for 5 models | `results/` | JSON |
| Negative controls | Random-target, shuffled-label, random-vector (see sketch below) | `experiments/neurips_c1c2_baselines_*.py` | Python |
| Benchmark card | This document | `BENCHMARK_CARD.md` | Markdown |
| Croissant metadata | Machine-readable metadata | `croissant.json` | JSON-LD |
| Example: add new model | Worked extensibility example | `examples/add_new_model.py` | Python |
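
To make the negative-controls row concrete, the sketch below shows a shuffled-label control for linear probing: the probe is refit on permuted targets, and its score should collapse toward zero. This is a simplified illustration, not the exact code in `experiments/neurips_c1c2_baselines_*.py`.

```python
# Hedged sketch of a shuffled-label negative control for linear probing;
# simplified relative to the benchmark's actual control scripts.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
acts = rng.normal(size=(150, 64))                 # stand-in for extracted activations
w = rng.normal(size=64)
target = acts @ w + rng.normal(scale=0.1, size=150)  # linearly decodable target

def probe_r2(X, y, n_train=100):
    """Fit a ridge probe on the first n_train rows, score on the rest."""
    probe = Ridge().fit(X[:n_train], y[:n_train])
    return r2_score(y[n_train:], probe.predict(X[n_train:]))

real = probe_r2(acts, target)
control = probe_r2(acts, rng.permutation(target))  # shuffled-label control
print(f"probe R2 = {real:.3f}, shuffled control R2 = {control:.3f}")
```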

### Instance Counts

- **Synthetic probes**: 4 function families × configurable N (default N_train=100, N_test=50)
- **Real-world datasets**: 11 for causal tracing, 4 for steering (all public, from scikit-learn/OpenML)
- **Models**: 5 TFMs (TabPFN v2, TabPFN v2.5, TabICL v2, TabDPT, iLTM)
- **MI techniques**: 8 (linear probing, logit lens, activation patching, vector ablation, steering vectors, SAE, CKA, attention analysis); a minimal CKA sketch follows this list
- **Reference seeds**: 5 for core probing on TabPFN/TabICL/iLTM; 3 for the TabDPT in-family holdout, the N=10K probing scale check, and real-world causal tracing; 5 for the NAM out-of-family holdout, real-world steering, and function invariance; 10 for robust steering and SAE expansion
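
Of the listed techniques, CKA compares representations across layers and models by a closed-form similarity score. Below is a minimal linear-CKA sketch under assumed centered activation matrices; the benchmark's own implementation may differ in detail.

```python
# Hedged sketch of linear CKA (Kornblith et al., 2019) between two
# activation matrices with one row per example; illustration only.
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den                             # in [0, 1]; higher = more similar

rng = np.random.default_rng(0)
layer_a = rng.normal(size=(50, 16))              # stand-in activations, layer A
layer_b = layer_a @ rng.normal(size=(16, 8))     # layer B, linearly related to A
print(linear_cka(layer_a, layer_b))
```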

### Data Types

- Synthetic: continuous numeric features only (no categorical features, no missing values)
- Real-world: mixed numeric and categorical features (categoricals encoded at load time); missing-value handling is not exercised by the synthetic core

### Sensitive Information

- No personally identifiable information (PII) in synthetic data
- Real-world datasets inherit properties of their public sources:
  - Adult Income (OpenML 1590): contains age, sex, race attributes
  - California Housing: geographic data reflecting historical patterns
- The benchmark evaluates MI methods, not model fairness

### Instances Not Included

- Concrete dataset excluded due to an OpenML cache failure
- Closed-source TFMs excluded (no checkpoint access)
- Non-ICL tabular methods (XGBoost, LightGBM) excluded by design

## 3. Collection Process

- **Synthetic data**: Generated deterministically from mathematical functions with fixed random seeds. No human data collection.
- **Real-world data**: Loaded from scikit-learn built-in datasets and OpenML public repositories at runtime. No new data collection.
- **Model activations**: Extracted via forward hooks registered on PyTorch modules during inference (see the sketch after this list). No model training performed.
- **Timeframe**: Experiments conducted January–March 2026
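
The sketch below illustrates the forward-hook extraction mechanism on a stand-in module; the per-model hook classes in `src/hooks/*.py` locate the real layer modules for each TFM, so the names and shapes here are illustrative only.

```python
# Hedged sketch of activation extraction with PyTorch forward hooks.
import torch
import torch.nn as nn

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Detach and move to CPU so stored tensors do not retain the graph.
        activations[name] = output.detach().cpu()
    return hook

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in TFM
handles = [m.register_forward_hook(make_hook(f"layer_{i}"))
           for i, m in enumerate(model)]

with torch.no_grad():            # inference only; no training is performed
    model(torch.randn(4, 8))

for h in handles:                # always remove hooks after extraction
    h.remove()

print({name: t.shape for name, t in activations.items()})
```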

## 4. Preprocessing

- **Synthetic**: Features generated from known distributions; targets computed from closed-form functions with Gaussian noise (see the sketch after this list).
- **Real-world**: StandardScaler applied to features (zero mean, unit variance). Train/test split with fixed random seed.
- **Activations**: Raw layer outputs stored as float64 numpy arrays. No dimensionality reduction applied before probing.
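
A hedged sketch of the synthetic generation and real-world scaling described above; the actual generator lives in `src/data/synthetic_generator.py`, and the function name and signature below are hypothetical.

```python
# Hypothetical sketch of a synthetic probe generator (bilinear family).
import numpy as np
from sklearn.preprocessing import StandardScaler

def make_bilinear_probe(n_train=100, n_test=50, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)            # fixed seed => deterministic data
    n = n_train + n_test
    X = rng.normal(size=(n, 2))                  # features from a known distribution
    y = X[:, 0] * X[:, 1]                        # closed-form bilinear target
    y = y + rng.normal(scale=noise_std, size=n)  # additive Gaussian noise
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])

(train_X, train_y), (test_X, test_y) = make_bilinear_probe()

# Real-world features are standardized (zero mean, unit variance); fitting
# the scaler on the training split only avoids test leakage.
scaler = StandardScaler().fit(train_X)
train_X, test_X = scaler.transform(train_X), scaler.transform(test_X)
```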

## 5. Uses

### Intended Uses

1. **Evaluating new MI techniques**: Run the 4-step protocol, compare results against reference baselines
2. **Profiling new TFMs**: Implement the hook interface (~150 lines; shape sketched below), run existing scripts
3. **Educational**: Understanding how different TFM architectures process tabular data internally
4. **Reproducibility**: Independent verification of the reported computation profiles
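
The sketch below shows one plausible shape for such a hook interface; the real one is defined under `src/hooks/` and demonstrated end-to-end in `examples/add_new_model.py`, so the class and method names here are hypothetical.

```python
# Hypothetical shape of a per-model hook adapter; names are illustrative.
from abc import ABC, abstractmethod
import torch

class TFMHooks(ABC):
    """Adapter that locates a TFM's layers and extracts activations."""

    @abstractmethod
    def layer_modules(self) -> list[torch.nn.Module]:
        """Return the ordered layer modules to register hooks on."""

    @abstractmethod
    def run_with_hooks(self, X_train, y_train, X_test) -> dict[str, torch.Tensor]:
        """Run in-context inference and return {layer_name: activations}."""
```

A new model is then profiled by subclassing the adapter and pointing the existing experiment scripts at it.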

### Not Intended For

- **Deployment certification**: Benchmark outputs are diagnostic measurements, NOT safety/fairness/compliance certifications
- **Model ranking**: TabMI-Bench is a protocol benchmark, not a scored leaderboard
- **Prediction accuracy comparison**: Use TabZilla, TabReD, or OpenML-CC18 for predictive performance
- **Security auditing**: The benchmark does not test adversarial robustness or data poisoning

### Potential Misuse

- Over-interpreting "staged" or "distributed" labels as complete model understanding
- Using steering controllability results as evidence of safe/aligned controllability
- Treating a benchmark pass as evidence of regulatory compliance

## 6. Distribution

- **License**: MIT (see `LICENSE` file)
- **Hosting**:
  - Code and scripts: GitHub (anonymous during review, public upon acceptance)
  - Archived artifacts: Zenodo (DOI-minted for long-term persistence)
  - ML-ecosystem discovery: HuggingFace Datasets (Croissant-annotated)
- **Access**: Open access. No registration required for code or synthetic data. Real-world datasets require internet access for OpenML downloads.
- **Export controls**: None. No dual-use concerns.

## 7. Maintenance

- **Maintainers**: Paper authors (contact: [upon acceptance])
- **Update policy**: Version-pinned snapshots for reproducibility. New model hooks and reference baselines added as TFMs are released.
- **Versioning**: Semantic versioning (major.minor.patch). Breaking changes increment the major version.
- **Community contributions**: Pull requests welcomed for new model hook implementations
- **Deprecation**: Individual model hooks may be deprecated when models are superseded, but archived results are preserved

## 8. Known Limitations

| Limitation | Scope | Mitigation |
|-----------|-------|------------|
| Small scale (N=100) | Core experiments | N=10K scale check preserves patterns |
| Regression only (synthetic) | Core probes | Classification probing in appendix |
| 5 models, 3 families | Taxonomy scope | Holdout validation on TabDPT (3-seed) and NAM (5-seed); extensible hook design |
| Post-hoc taxonomy | Scientific claim | Profile-variance metric, ±50% threshold robustness, and leave-one-function-out (LOFO) checks confirm the primary-endpoint ordering (ratio >29× in every holdout) |
| N=10K steering sub-check is single-seed | Evidence tier | Probing component uses 3 seeds; steering sub-check labeled as supporting evidence |
| No categorical synthetic features | Probe design | Real-world datasets include categorical features |
| Probing collapses at d≥8 | Method limitation | Causal tracing remains informative |

## 9. Ethical Considerations

- The benchmark studies existing open-source models and does not create new capabilities
- Real-world datasets are publicly available and widely used in ML research
- Steering demonstrations should NOT be interpreted as evidence of safe model controllability
- Users should not rely on benchmark outputs for regulatory compliance without additional validation

## 10. Citation

```bibtex
@inproceedings{anonymous2026tabmibench,
  title={TabMI-Bench: Evaluating Mechanistic Interpretability Methods Across Tabular Foundation Model Architectures},
  author={Anonymous},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2026}
}
```