---
license: mit
datasets:
- ILSVRC/imagenet-1k
tags:
- uncertainty quantification
- model robustness
- selective classification
- label-smoothing
---

[arXiv:2403.14715](https://arxiv.org/abs/2403.14715)
This repository contains the models trained as experimental support for the paper "Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It", published at ICLR 2025.

The code is based on [TorchUncertainty](https://github.com/ENSTA-U2IS-AI/torch-uncertainty) and is available on [GitHub](https://github.com/o-laurent/Label-smoothing-Selective-classification).
## List of models

This repository contains:
- for classification on ImageNet with ViTs: 4 ViT-S/16 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
- for classification on ImageNet with ResNets: 4 ResNet-50 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
- for classification on CIFAR-100: 4 DenseNet-BC models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
- for segmentation: 4 DeepLabv3+ ResNet-101 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
- for NLP: one CE-based and one LS-based (LS coefficient 0.6) LSTM-MLP
The remaining models used in the paper (notably those on tabular data) can be trained on CPU using the dedicated notebooks.
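For reference, the label-smoothing coefficients above refer to the standard smoothed cross-entropy, where the one-hot target is mixed with the uniform distribution over the K classes (the eps/K convention also used by `torch.nn.CrossEntropyLoss(label_smoothing=...)`). A minimal NumPy sketch, with illustrative logits that are not taken from any of these models:

```python
import numpy as np

def label_smoothing_ce(logits, target, eps):
    """Cross-entropy with label smoothing: the target distribution is
    (1 - eps) * one_hot(target) + eps / K over the K classes."""
    logits = np.asarray(logits, dtype=float)
    K = logits.shape[-1]
    # numerically stable log-softmax
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    smoothed = np.full(K, eps / K)
    smoothed[target] += 1.0 - eps
    return float(-(smoothed * log_probs).sum())

example_logits = [2.0, 0.5, -1.0]
for eps in [0.0, 0.1, 0.2, 0.3]:  # the coefficients used for the models in this repo
    print(f"eps={eps}: loss={label_smoothing_ce(example_logits, target=0, eps=eps):.4f}")
```

With `eps=0` this reduces to the plain cross-entropy; larger coefficients pull the target distribution toward uniform, which is the mechanism the paper analyzes in the context of selective classification.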