---
license: mit
task_categories:
- image-classification
- token-classification
language:
- en
pretty_name: Post-hoc Calibration Dataset
tags:
- deep learning
- neural network classifier
- classification calibration
- network calibration
- canonical calibration
- deep learning calibration
---
# Post-hoc Calibration Dataset
This repository contains datasets designed for evaluating and developing **post-hoc calibration** methods for deep neural network classifiers. Each dataset includes precomputed logits and ground-truth labels, clearly divided into training and test splits.
## Dataset Overview
The datasets provided here cover popular benchmark tasks, including CIFAR-10, CIFAR-100, SVHN, Stanford Cars (CARS), CUB-200 Birds (BIRDS), and ImageNet. The composition of each dataset for post-hoc calibration is listed below:
| Dataset | # Classes | Training Set Size | Test Set Size |
|--------------|-----------|-------------------|---------------|
| CIFAR-10 | 10 | 5000 | 10000 |
| SVHN | 10 | 6000 | 26032 |
| CIFAR-100 | 100 | 5000 | 10000 |
| CARS | 196 | 4020 | 4020 |
| BIRDS | 200 | 2897 | 2897 |
| ImageNet | 1000 | 25000 | 25000 |
## Included Pre-trained Networks on Classification Datasets
Each `.p` file represents one calibration task and contains ground-truth labels together with predicted logits from a specific pre-trained neural network architecture, as listed below:
- **`probs_resnet110_c10_logits.p`**: ResNet110 on CIFAR-10
- **`probs_resnet_wide32_c10_logits.p`**: WideResNet32 on CIFAR-10
- **`probs_densenet40_c10_logits.p`**: DenseNet40 on CIFAR-10
- **`probs_resnet110_c100_logits.p`**: ResNet110 on CIFAR-100
- **`probs_resnet_wide32_c100_logits.p`**: WideResNet32 on CIFAR-100
- **`probs_densenet40_c100_logits.p`**: DenseNet40 on CIFAR-100
- **`probs_resnet152_SD_SVHN_logits.p`**: ResNet152 SD on SVHN
- **`probs_resnet50NTSNet_birds_logits.p`**: ResNet50NTSNet on BIRDS
- **`probs_resnet50_cars_logits.p`**: ResNet50 on CARS
- **`probs_resnet101scratch_cars_logits.p`**: ResNet101 from scratch on CARS
- **`probs_resnet101_cars_logits.p`**: ResNet101 on CARS (initialized with ImageNet weights)
- **`probs_densenet161_imgnet_logits.p`**: DenseNet161 on ImageNet
- **`probs_pnasnet5large_imgnet_logits.p`**: PNASNet5large on ImageNet
- **`probs_resnet152_imgnet_logits.p`**: ResNet152 on ImageNet
- **`probs_swintiny_imgnet_logits.p`**: Swin Transformer (tiny) on ImageNet
## Data Loading
Each dataset is stored as a Python pickle file (`.p`). Load the datasets with the following Python snippet:
```python
import pickle
with open('path_to_dataset.p', 'rb') as f:
    (x_logits_train, y_train), (x_logits_test, y_test) = pickle.load(f)
```
- `x_logits_train`, `x_logits_test`: The logits (raw, pre-softmax neural network outputs).
- `y_train`, `y_test`: The corresponding ground-truth labels.
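Once a file is loaded, a typical workflow is to fit a post-hoc calibrator on the training split and measure calibration on the test split. As an illustrative sketch (not part of this repository), the snippet below implements temperature scaling with a simple grid search over the temperature, plus an Expected Calibration Error (ECE) metric with equal-width confidence bins; the function names and the grid-search choice are this sketch's own assumptions, not the method used in the accompanying paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=15):
    # Expected Calibration Error with equal-width confidence bins:
    # weighted average of |accuracy - confidence| per bin.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            total += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return total

def fit_temperature(logits, labels, temps=np.linspace(0.5, 5.0, 91)):
    # Grid-search the single temperature T that minimizes the
    # negative log-likelihood of softmax(logits / T) on the train split.
    best_t, best_nll = 1.0, np.inf
    for t in temps:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

With the variables from the loading snippet above, usage would look like `t = fit_temperature(x_logits_train, y_train)` followed by `ece(softmax(x_logits_test / t), y_test)`, comparing against the uncalibrated `ece(softmax(x_logits_test), y_test)`.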
## Reference
A more detailed description of the dataset can be found in the following paper:
```
W. Huang, G. Cao, J. Xia, J. Chen, H. Wang, and J. Zhang. h-calibration: Rethinking Classifier Recalibration with Probabilistic Error-Bounded Objective.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025.
```
The official GitHub implementation of the `h-calibration` study, which uses this dataset, is available here:
- [h-Calibration GitHub Repository](https://github.com/WenjianHuang93/h-Calibration)