# CalArena — Calibration Benchmark Dataset

CalArena is a large-scale benchmark for evaluating post-hoc calibration methods on classification models. It covers 7 benchmarks across the tabular and computer-vision domains, spanning hundreds of (dataset, model) pairs and three problem types (binary, multiclass, and large-scale multiclass).

Each entry in the benchmark is a (p_cal, y_cal, p_test, y_test) tuple — the calibration and test splits of predicted probabilities and ground-truth labels for one (dataset, model) pair. Calibration methods are fitted on the calibration split and evaluated on the test split.
This dataset is the data companion to the CalArena code repository.
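To make the fit-then-evaluate workflow concrete, here is a minimal sketch of one classical post-hoc calibrator, temperature scaling, applied to such a tuple. This is illustrative only: the function names and the NLL objective are assumptions, not CalArena's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def fit_temperature(p_cal, y_cal, eps=1e-12):
    """Fit a scalar temperature T on the calibration split by minimising NLL.

    p_cal: (n_cal, n_classes) predicted class probabilities
    y_cal: (n_cal,) integer labels
    """
    logits = np.log(np.clip(p_cal, eps, 1.0))  # recover log-probabilities

    def nll(t):
        z = logits / t
        z -= z.max(axis=1, keepdims=True)          # numerical stability
        log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_softmax[np.arange(len(y_cal)), y_cal].mean()

    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x


def apply_temperature(p_test, t, eps=1e-12):
    """Rescale test probabilities with the fitted temperature."""
    z = np.log(np.clip(p_test, eps, 1.0)) / t
    z -= z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

Any calibrator benchmarked on CalArena follows the same contract: fit on (p_cal, y_cal), then transform p_test before scoring against y_test.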
## Files

| File | Description | Size |
|---|---|---|
| `tabrepo-binary.h5` | Binary classification, classical tabular models | ~36 MB |
| `tabrepo-binary-experiments.csv` | Experiment index for `tabrepo-binary` | < 1 MB |
| `tabarena-binary.h5` | Binary classification, modern tabular foundation models | ~26 MB |
| `tabarena-binary-experiments.csv` | Experiment index for `tabarena-binary` | < 1 MB |
| `cv-binary.h5` | Binary classification, computer vision models | < 1 MB |
| `cv-binary-experiments.csv` | Experiment index for `cv-binary` | < 1 MB |
| `tabrepo-multiclass.h5` | Multiclass classification, classical tabular models | ~115 MB |
| `tabrepo-multiclass-experiments.csv` | Experiment index for `tabrepo-multiclass` | < 1 MB |
| `tabarena-multiclass.h5` | Multiclass classification, modern tabular foundation models | ~11 MB |
| `tabarena-multiclass-experiments.csv` | Experiment index for `tabarena-multiclass` | < 1 MB |
| `cv-multiclass.h5` | Multiclass classification, computer vision models | ~39 MB |
| `cv-multiclass-experiments.csv` | Experiment index for `cv-multiclass` | < 1 MB |
| `imagenet-multiclass.h5` | 1000-class ImageNet, computer vision models | ~1.5 GB |
| `imagenet-multiclass-experiments.csv` | Experiment index for `imagenet-multiclass` | < 1 MB |
## Benchmark overview

| Benchmark | Problem type | # Base models | Datasets | # Experiments |
|---|---|---|---|---|
| `tabrepo-binary` | Binary | 8 | 104 tabular datasets | 832 |
| `tabarena-binary` | Binary | 11 | 30 tabular datasets | 314 |
| `cv-binary` | Binary | 9 | 3 (CIFAR-10†, Breast, Pneumonia) | 13 |
| `tabrepo-multiclass` | Multiclass | 8 | 65 tabular datasets | 520 |
| `tabarena-multiclass` | Multiclass | 11 | 8 tabular datasets | 84 |
| `cv-multiclass` | Multiclass | 10 | 6 (CIFAR-10, CIFAR-100, Birds, SVHN, Derma, OCT) | 20 |
| `imagenet-multiclass` | Large-scale multiclass | 8 | 1 (ImageNet) | 8 |

† CIFAR-10 is converted to binary (Animal vs Machine) by marginalising over class groups.
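The group marginalisation in the footnote can be sketched as follows, assuming the standard CIFAR-10 class order; the exact group indices used by CalArena are an assumption here, not taken from the dataset itself.

```python
import numpy as np

# Standard CIFAR-10 class order: airplane, automobile, bird, cat, deer,
# dog, frog, horse, ship, truck. Indices 2-7 are animals, the rest machines.
ANIMAL = [2, 3, 4, 5, 6, 7]
MACHINE = [0, 1, 8, 9]


def marginalise_animal_vs_machine(p):
    """Collapse (n, 10) CIFAR-10 class probabilities into the positive-class
    probability P(animal) by summing over the animal group.

    P(machine) is the complement, 1 - P(animal).
    """
    p = np.asarray(p)
    return p[:, ANIMAL].sum(axis=1)
```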
## Base models

TabRepo (classical tabular): CatBoost, ExtraTrees, LightGBM, LinearModel, NeuralNetFastAI, NeuralNetTorch, RandomForest, XGBoost. Source: TabRepo repository `D244_F3_C1530_200`. The best hyperparameter configuration is selected per (dataset, model, fold) by validation error.

TabArena (modern tabular): TabPFN-v2.6, TabICLv2, RealTabPFN-v2.5, TabICL_GPU, LimiX_GPU, TabM_GPU, RealMLP_GPU, BetaTabPFN_GPU, ModernNCA_GPU, Mitra_GPU, TabDPT_GPU. Models were selected by requiring ≥ 1300 ELO on the TabArena leaderboard (Classification, All Datasets, as of April 1, 2026). Source: TabArena.

Computer vision: ResNet, DenseNet, WideResNet, ViT, BEiT, ConvNeXt, Swin, EVA, and others, depending on the dataset. Logits are sourced from two collections: NN_calibration and Beyond Overconfidence.
## Data format

### HDF5 files

Each `.h5` file has the following structure:

```
{dataset}/
  {model}/
    probas_cal    float32 (n_cal,)             # positive-class probabilities [binary]
                  float32 (n_cal, n_classes)   # class probabilities [multiclass]
    labels_cal    int32   (n_cal,)
    probas_test   float32 (n_test,)            # same shape conventions as above
    labels_test   int32   (n_test,)
```

File-level attributes:

- `source` — `"tabrepo"`, `"tabarena"`, `"cv"`, or `"imagenet"`
- `problem_type` — `"binary"` or `"multiclass"`

All probabilities are valid (non-negative; multiclass rows sum to 1). Labels are 0-indexed integers.
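Because the binary files store only the positive-class probability while multiclass files store a full matrix, downstream code often normalises both conventions to a single (n, n_classes) matrix. A small helper sketch (hypothetical, not part of CalArena):

```python
import numpy as np


def to_two_column(p):
    """Expand binary positive-class probabilities (n,) to (n, 2).

    Multiclass arrays (n, k) pass through unchanged, so downstream
    calibration code can assume an (n, n_classes) matrix everywhere.
    """
    p = np.asarray(p, dtype=np.float32)
    if p.ndim == 1:
        # Column 0: negative class, column 1: positive class
        return np.stack([1.0 - p, p], axis=1)
    return p
```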
### Experiment CSV files

Each `{benchmark}-experiments.csv` lists one row per (dataset, model) pair:

| Column | Description |
|---|---|
| `dataset` | Dataset name (matches the HDF5 group key) |
| `model` | Model name (matches the HDF5 group key) |
| `cal_size` | Number of calibration samples |
| `test_size` | Number of test samples |
| `n_classes` | Number of classes (multiclass benchmarks only) |
| `tabrepo_fold` / `tabarena_fold` | Fold index used (TabRepo/TabArena benchmarks) |
| `tabrepo_config` / `tabarena_config` | Best hyperparameter configuration selected (TabRepo/TabArena) |
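The `dataset` and `model` columns map directly onto HDF5 group keys, so the CSV can drive iteration over a benchmark file. A small illustration using an in-memory stand-in for one of these CSVs (the rows are invented for illustration; the column names follow the table above):

```python
from io import StringIO

import pandas as pd

# In-memory stand-in for a {benchmark}-experiments.csv
csv = StringIO(
    "dataset,model,cal_size,test_size,n_classes\n"
    "c10,densenet40,5000,10000,10\n"
    "c100,lenet5,5000,10000,100\n"
)
df = pd.read_csv(csv)

# HDF5 group key for each experiment, e.g. f["c10/densenet40"]
df["h5_key"] = df["dataset"] + "/" + df["model"]

# Example query: experiments with small calibration splits
small = df[df["cal_size"] <= 5000]
```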
## Loading the data

### Python (h5py)

```python
import h5py

with h5py.File("tabrepo-binary.h5", "r") as f:
    # List all (dataset, model) pairs
    pairs = [(ds, mdl) for ds in f for mdl in f[ds]]

    # Load a single experiment
    grp = f["anneal/CatBoost"]
    p_cal = grp["probas_cal"][:]    # shape (n_cal,)
    y_cal = grp["labels_cal"][:]    # shape (n_cal,)
    p_test = grp["probas_test"][:]  # shape (n_test,)
    y_test = grp["labels_test"][:]  # shape (n_test,)
```
### With the CalArena runner

The CalArena repository provides `run_benchmark.py`, which loads these files automatically and runs all calibrators:

```shell
# Place the .h5 and .csv files under calibration_benchmarks/
python run_benchmark.py --benchmark tabrepo-binary
```
## Dataset construction

The scripts used to generate the benchmark files can be found in the CalArena repository.

### Calibration / test split

For TabRepo and TabArena, the calibration split corresponds to the validation fold of the respective repository, and the test split is the held-out test set. This ensures no data leakage: the base model never sees the calibration set during training.

For computer-vision datasets, the calibration and test splits are fixed partitions provided by the original data sources.
### Excluded datasets

The following datasets were excluded due to errors in the upstream repositories:
- TabRepo binary: MiniBooNE
- TabRepo multiclass: jannis, kropt, shuttle
## Intended use

This dataset is intended for:
- Benchmarking post-hoc calibration algorithms on diverse classification tasks
- Studying the relationship between model type, dataset characteristics, and calibration difficulty
- Developing new calibration methods with access to pre-computed probability estimates
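Benchmarking a calibrator on these splits requires a calibration metric. Below is a minimal expected calibration error (ECE) sketch using equal-width confidence bins, a common convention; CalArena's own evaluation code may bin or weight differently.

```python
import numpy as np


def ece(probs, labels, n_bins=15):
    """Expected calibration error with equal-width confidence bins.

    probs: (n, n_classes) predicted probabilities; labels: (n,) integers.
    Returns the bin-weighted mean |accuracy - confidence| gap.
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=1)                     # top-class confidence
    pred = probs.argmax(axis=1)
    correct = (pred == np.asarray(labels)).astype(float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            total += mask.mean() * gap           # weight by bin occupancy
    return total
```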
## License
The benchmark data is released under CC BY 4.0. Downstream datasets (OpenML, CIFAR, ImageNet, etc.) retain their original licenses; please consult the respective sources before redistribution.
## Citation

```bibtex
@inproceedings{calarena2025,
  title     = {CalArena: A Large-Scale Benchmark for Post-Hoc Calibration},
  author    = {...},
  booktitle = {...},
  year      = {2025},
}
```