
Method

Let $D = \langle(\bm{\mathrm{x}}_i, y_i)\rangle_{i=1}^N$ denote a dataset consisting of $N$ samples from a joint distribution $\mathcal{D}(\mathcal{X}, \mathcal{Y})$, where for each sample $i$, $\bm{\mathrm{x}}_i \in \mathcal{X}$ is the input and $y_i \in \mathcal{Y} = \{1, 2, ..., K\}$ is the ground-truth class label. Let $\hat{p}_{i,y} = f_\theta(y|\bm{\mathrm{x}}_i)$ be the probability that a neural network $f$ with model parameters $\theta$ predicts for a class $y$ on a given input $\bm{\mathrm{x}}_i$. The class that $f$ predicts for $\bm{\mathrm{x}}_i$ is computed as $\hat{y}_i = \mathrm{argmax}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$, and the predicted confidence as $\hat{p}_i = \mathrm{max}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$. The network is said to be perfectly calibrated when, for each sample $(\bm{\mathrm{x}}, y) \in D$, the confidence $\hat{p}$ is equal to the model accuracy $\mathbb{P}(\hat{y} = y | \hat{p})$, i.e. the probability that the predicted class is correct. For instance, of all the samples to which a perfectly calibrated neural network assigns a confidence of $0.8$, $80\%$ should be correctly predicted.

A popular metric used to measure model calibration is the expected calibration error (ECE) [@Naeini2015], defined as the expected absolute difference between the model's confidence and its accuracy, i.e. $\mathbb{E}_{\hat{p}} \big[ \left| \mathbb{P}(\hat{y} = y | \hat{p}) - \hat{p} \right| \big]$. Since we only have finite samples, the ECE cannot in practice be computed using this definition. Instead, we divide the interval $[0,1]$ into $M$ equispaced bins, where the $i^{\mathrm{th}}$ bin is the interval $\left(\frac{i-1}{M}, \frac{i}{M} \right]$. Let $B_i$ denote the set of samples with confidences belonging to the $i^{\mathrm{th}}$ bin. The accuracy $A_i$ of this bin is computed as $A_i = \frac{1}{|B_i|} \sum_{j \in B_i} \mathbbm{1} \left(\hat{y}_j = y_j\right)$, where $\mathbbm{1}$ is the indicator function, and $\hat{y}_j$ and $y_j$ are the predicted and ground-truth labels for the $j^{\mathrm{th}}$ sample. Similarly, the confidence $C_i$ of the $i^{\mathrm{th}}$ bin is computed as $C_i = \frac{1}{|B_i|} \sum_{j \in B_i} \hat{p}_j$, i.e. $C_i$ is the average confidence of all samples in the bin. The ECE can then be approximated as a weighted average of the absolute difference between the accuracy and confidence of each bin: $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right|$.

A similar metric, the maximum calibration error (MCE) [@Naeini2015], is defined as the maximum absolute difference between the accuracy and confidence of each bin: $\mathrm{MCE} = \mathrm{max}_{i \in \{1, ..., M\}}\left|A_i - C_i\right|$.
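To make the binning concrete, here is a minimal numpy sketch of the ECE and MCE estimates defined above. The function name and interface are our own, not from the paper; it assumes confidences and correctness indicators have already been extracted from the model's predictions.

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=15):
    """Equal-width-bin estimates of ECE and MCE.

    confidences: predicted confidence p_i per sample (max softmax probability).
    correct: per-sample indicator, 1 if the predicted class matches the label.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for i in range(1, n_bins + 1):
        lo, hi = (i - 1) / n_bins, i / n_bins
        # the i-th bin is the half-open interval ((i-1)/M, i/M]
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()        # A_i
        conf = confidences[in_bin].mean()   # C_i
        gap = abs(acc - conf)
        ece += (in_bin.sum() / n) * gap     # weighted by |B_i| / N
        mce = max(mce, gap)                 # worst bin
    return ece, mce
```

For example, ten samples all predicted with confidence $0.95$ of which eight are correct give $\mathrm{ECE} = \mathrm{MCE} = 0.15$, since all samples fall into a single bin.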

AdaECE: One disadvantage of ECE is the uniform bin width. For a trained model, most of the samples lie within the highest confidence bins, and hence these bins dominate the value of the ECE. We thus also consider another metric, AdaECE (Adaptive ECE), for which bin sizes are calculated so as to evenly distribute samples between bins (similar to the adaptive binning procedure in [@Nguyen2015posterior]): $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right| \text{ s.t.\ } \forall i, j \cdot |B_i| = |B_j|$.
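A corresponding sketch of AdaECE (again with an interface of our own devising, not the authors' code) sorts the samples by confidence and splits them into near-equal-size bins before computing the same weighted sum:

```python
import numpy as np

def ada_ece(confidences, correct, n_bins=15):
    """Adaptive-bin ECE: every bin holds (roughly) the same number of samples."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(confidences)
    confidences, correct = confidences[order], correct[order]
    n = len(confidences)
    ece = 0.0
    # split the sorted samples into n_bins near-equal-size groups
    for bin_conf, bin_corr in zip(np.array_split(confidences, n_bins),
                                  np.array_split(correct, n_bins)):
        if len(bin_conf) == 0:
            continue
        ece += (len(bin_conf) / n) * abs(bin_corr.mean() - bin_conf.mean())
    return ece
```

Unlike the equal-width estimate, no bin here can be dominated by the mass of high-confidence samples, which is exactly the motivation given above.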

Classwise-ECE: The ECE metric only considers the probability of the predicted class, without considering the other scores in the softmax distribution. A stronger definition of calibration would require the probabilities of all the classes in the softmax distribution to be calibrated [@Kull2019beyond; @Vaicenavicius2019; @Widmann2019calibration; @Kumar2019verified]. This can be achieved with a simple classwise extension of the ECE metric: $\mathrm{Classwise ECE} = \frac{1}{K} \sum_{i=1}^{M}\sum_{j=1}^{K} \frac{|B_{i,j}|}{N} \left| A_{i,j} - C_{i,j} \right|$, where $K$ is the number of classes, $B_{i,j}$ denotes the set of samples from the $j^{\mathrm{th}}$ class in the $i^{\mathrm{th}}$ bin, $A_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \mathbbm{1} \left(j = y_k\right)$ and $C_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \hat{p}_{k,j}$.
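The classwise extension can be sketched in the same style; `classwise_ece` below is our own naming and interface, and it loops over classes and bins exactly as in the formula:

```python
import numpy as np

def classwise_ece(probs, labels, n_bins=15):
    """Classwise-ECE over the full softmax output.

    probs: (N, K) array of softmax probabilities; labels: (N,) true classes.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    n, k = probs.shape
    total = 0.0
    for j in range(k):                       # per-class calibration
        p_j = probs[:, j]
        is_j = (labels == j).astype(float)   # empirical frequency of class j
        for i in range(1, n_bins + 1):
            in_bin = (p_j > (i - 1) / n_bins) & (p_j <= i / n_bins)
            if in_bin.any():
                total += (in_bin.sum() / n) * abs(is_j[in_bin].mean()
                                                  - p_j[in_bin].mean())
    return total / k
```

A model that always outputs $(0.8, 0.2)$ on a dataset where class $0$ occurs $80\%$ of the time scores (close to) zero, as both class probabilities match the empirical frequencies.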

A common way of visualising calibration is to use a reliability plot [@Niculescu2005], which plots the accuracies of the confidence bins as a bar chart (see Appendix Figure 6{reference-type="ref" reference="fig:rel_conf_bin_plot"}). For a perfectly calibrated model, the accuracy for each bin matches the confidence, and hence all of the bars lie on the diagonal. By contrast, if most of the bars lie above the diagonal, the model is more accurate than it expects, and is under-confident, and if most of the bars lie below the diagonal, then it is over-confident.

We now discuss why high-capacity neural networks, despite achieving low classification errors on well-known datasets, tend to be miscalibrated. A key empirical observation made by [@Guo2017] was that poor calibration of such networks appears to be linked to overfitting on the negative log-likelihood (NLL) during training. In this section, we further inspect this observation to provide new insights.

For the analysis, we train a ResNet-50 network on CIFAR-10 with state-of-the-art performance settings [@PyTorchCIFAR]. We use Stochastic Gradient Descent (SGD) with a mini-batch of size 128, momentum of 0.9, and learning rate schedule of $\{0.1, 0.01, 0.001\}$ for the first 150, next 100, and last 100 epochs, respectively. We minimise cross-entropy loss (a.k.a. NLL) $\mathcal{L}_c$, which, in a standard classification context, is $-\log \hat{p}_{i,y_i}$, where $\hat{p}_{i,y_i}$ is the probability assigned by the network to the correct class $y_i$ for the $i^{\mathrm{th}}$ sample. Note that the NLL is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, whereas the classification error is minimised when $\hat{p}_{i,y_i} > \hat{p}_{i,y}$ for all $y \neq y_i$. This indicates that even when the classification error is $0$, the NLL can be positive, and the optimisation algorithm can still try to reduce it to $0$ by further increasing the value of $\hat{p}_{i,y_i}$ for each sample (see Appendix 8{reference-type="ref" reference="rel_plots_appendix"}).

To study how miscalibration occurs during training, we plot the average NLL for the train and test sets at each training epoch in Figures 1{reference-type="ref" reference="fig:nll_entropy_ece"}(a) and 1{reference-type="ref" reference="fig:nll_entropy_ece"}(b). We also plot the average NLL and the entropy of the softmax distribution produced by the network for the correctly and incorrectly classified samples. In Figure 1{reference-type="ref" reference="fig:nll_entropy_ece"}(c), we plot the classification errors on the train and test sets, along with the test set ECE.

Metrics related to calibration plotted whilst training a ResNet-50 network on CIFAR-10.

Curse of misclassified samples: Figures 1{reference-type="ref" reference="fig:nll_entropy_ece"}(a) and 1{reference-type="ref" reference="fig:nll_entropy_ece"}(b) show that although the average train NLL (for both correctly and incorrectly classified training samples) broadly decreases throughout training, after the $150^{th}$ epoch (where the learning rate drops by a factor of $10$), there is a marked rise in the average test NLL, indicating that the network starts to overfit on average NLL. This increase in average test NLL is caused only by the incorrectly classified samples, as the average NLL for the correctly classified samples continues to decrease even after the $150^{th}$ epoch. We also observe that after epoch $150$, the test set ECE rises, indicating that the network is becoming miscalibrated. This corroborates the observation in [@Guo2017] that miscalibration and NLL overfitting are linked.

Peak at the wrong place: We further observe that the entropies of the softmax distributions for both the correctly and incorrectly classified test samples decrease throughout training (in other words, the distributions get peakier). This observation, coupled with the one we made above, indicates that for the wrongly classified test samples, the network gradually becomes more and more confident about its incorrect predictions.

Weight magnification: The increase in confidence of the network's predictions can happen if the network increases the norm of its weights $W$ to increase the magnitudes of the logits. In fact, cross-entropy loss is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, which is possible only when $||W|| \to \infty$. Cross-entropy loss thus inherently induces this tendency of weight magnification in neural network optimisation. This may also explain why weight decay [@Guo2017], which regularises the norm of the weights, tends to improve the calibration of neural networks. This increase in the network's confidence during training is one of the key causes of miscalibration.

As discussed in §3{reference-type="ref" reference="sec:cause_cali"}, overfitting on NLL, which is observed as the network grows more confident on all of its predictions irrespective of their correctness, is strongly related to poor calibration. One cause of this is that the cross-entropy objective minimises the difference between the softmax distribution and the ground-truth one-hot encoding over an entire mini-batch, irrespective of how well a network classifies individual samples in the mini-batch. In this work, we study an alternative loss function, popularly known as focal loss [@Lin2017], that tackles this by weighting loss components generated from individual samples in a mini-batch by how well the model classifies them. For classification tasks where the target distribution is a one-hot encoding, it is defined as $\mathcal{L}_f = -(1 - \hat{p}_{i,y_i})^\gamma \log \hat{p}_{i,y_i}$, where $\gamma$ is a user-defined hyperparameter[^2].
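A minimal numpy sketch of this loss for one-hot targets (the vectorised mini-batch interface is our own; the paper defines the loss per sample):

```python
import numpy as np

def focal_loss(probs, labels, gamma=3.0):
    """Mean focal loss over a mini-batch with one-hot targets.

    probs: (N, K) softmax outputs; labels: (N,) ground-truth class indices.
    gamma = 0 recovers the standard cross-entropy (NLL).
    """
    probs = np.asarray(probs, dtype=float)
    # probability assigned to the correct class for each sample
    p_correct = probs[np.arange(len(labels)), labels]
    # the (1 - p)^gamma factor down-weights well-classified samples
    return float(np.mean(-((1.0 - p_correct) ** gamma) * np.log(p_correct)))
```

Note how a confidently correct sample ($\hat{p}_{i,y_i} = 0.9$, say) contributes $(0.1)^\gamma$ times its cross-entropy loss, so its gradient contribution is heavily damped, while a poorly classified sample contributes nearly its full cross-entropy loss.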

Why might focal loss improve calibration? We know that cross-entropy forms an upper bound on the KL-divergence between the target distribution $q$ and the predicted distribution $\hat{p}$, i.e. $\mathcal{L}_c \geq \mathrm{KL}(q||\hat{p})$, so minimising cross-entropy results in minimising $\mathrm{KL}(q||\hat{p})$. Interestingly, a general form of focal loss can be shown to be an upper bound on the regularised KL-divergence, where the regulariser is the negative entropy of the predicted distribution $\hat{p}$, and the regularisation parameter is $\gamma$, the hyperparameter of focal loss (a proof of this can be found in Appendix 9{reference-type="ref" reference="reg_bregman"}): $$\begin{equation} \label{eq:reg_bregman} \mathcal{L}_f \geq \mathrm{KL}(q||\hat{p}) - \gamma\mathbb{H}[\hat{p}]. \end{equation}$$ The most interesting property of this upper bound is that it shows that replacing cross-entropy with focal loss has the effect of adding a maximum-entropy regulariser [@Pereyra2017] to the implicit minimisation that was previously being performed. In other words, trying to minimise focal loss minimises the KL divergence between $\hat{p}$ and $q$, whilst simultaneously increasing the entropy of the predicted distribution $\hat{p}$. Note that when the ground truth is a one-hot encoding, only the component of the entropy of $\hat{p}$ corresponding to the ground-truth index, $\gamma (-\hat{p}_{i,y_i} \log \hat{p}_{i,y_i})$, will be maximised (refer to Appendix 9{reference-type="ref" reference="reg_bregman"}). Encouraging the predicted distribution to have higher entropy can help avoid the overconfident predictions produced by DNNs (see the 'Peak at the wrong place' paragraph of §3{reference-type="ref" reference="sec:cause_cali"}), and thereby improve calibration.
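The bound can be spot-checked numerically. The sketch below is our own (not part of the paper's proof) and uses one-hot targets, for which $\mathrm{KL}(q||\hat{p})$ reduces to $-\log \hat{p}_{i,y_i}$; it samples random softmax distributions and verifies the inequality for a given $\gamma \geq 1$:

```python
import math
import random

def focal(p_y, gamma):
    """Per-sample focal loss given the probability of the true class."""
    return -((1.0 - p_y) ** gamma) * math.log(p_y)

def check_bound(trials=1000, gamma=2.0, seed=0):
    """Spot-check L_f >= KL(q||p) - gamma * H[p] for one-hot q, where
    KL(q||p) = -log p_y and H[p] is the Shannon entropy of p."""
    rng = random.Random(seed)
    for _ in range(trials):
        # random 3-class softmax distribution
        z = [rng.random() + 1e-9 for _ in range(3)]
        s = sum(z)
        p = [v / s for v in z]
        y = rng.randrange(3)  # arbitrary ground-truth class
        kl = -math.log(p[y])
        h = -sum(v * math.log(v) for v in p)
        if focal(p[y], gamma) < kl - gamma * h - 1e-12:
            return False
    return True
```

Such a check is not a proof, of course, but it makes the direction of the inequality easy to see: the entropy term buys the focal loss its slack relative to the KL-divergence.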

How metrics related to model calibration change whilst training several ResNet-50 networks on CIFAR-10, using either cross-entropy loss, or focal loss with γ set to 1, 2 or 3.

Empirical observations: To analyse the behaviour of neural networks trained on focal loss, we use the same framework as mentioned above, and train four ResNet-50 networks on CIFAR-10, one using cross-entropy loss, and three using focal loss with $\gamma = 1, 2$ and $3$. Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(a) shows that the test NLL for the cross-entropy model significantly increases towards the end of training (before saturating), whereas the NLLs for the focal loss models remain low. To better understand this, we analyse the behaviour of these models for correctly and incorrectly classified samples. Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(b) shows that even though the NLLs for the correctly classified samples, broadly speaking, decrease over the course of training for all the models, the NLLs for the focal loss models remain consistently higher than those for the cross-entropy model throughout training, implying that the focal loss models are relatively less confident than the cross-entropy model for samples that they predict correctly. This is important, as we have already discussed that it is overconfidence that normally makes deep neural networks miscalibrated. Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(c) shows that in contrast to the cross-entropy model, for which the NLL for misclassified test samples increases significantly after epoch $150$, the rise in this value for the focal loss models is much less severe. Additionally, in Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(d), we notice that the entropy of the softmax distribution for misclassified test samples is consistently (if marginally) higher for focal loss than for cross-entropy (consistent with Equation [eq:reg_bregman]{reference-type="ref" reference="eq:reg_bregman"}).

Note that from Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(a), one might think that applying early stopping when training a model on cross-entropy can provide better calibration scores. However, there is no ideal way of doing early stopping that provides both the best calibration error and the best test set accuracy. For a fair comparison, we chose $3$ intermediate models for each loss function, with the best validation set ECE, NLL and accuracy respectively, and observed that: a) for every stopping criterion, focal loss outperforms cross-entropy in both test set accuracy and ECE, and b) when using validation set ECE as a stopping criterion, the intermediate model for cross-entropy does indeed improve its test set ECE, but at the cost of a significantly higher test error. Please refer to Appendix 17{reference-type="ref" reference="sec:early_stopping"} for more details.

As per §3{reference-type="ref" reference="sec:cause_cali"}, an increase in the test NLL and a decrease in the test entropy for misclassified samples, along with no corresponding increase in the test NLL for the correctly classified samples, can be interpreted as the network starting to predict softmax distributions for the misclassified samples that are ever more peaky in the wrong place. Notably, our results in Figures 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(b), 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(c) and 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(d) clearly show that this effect is significantly reduced when training with focal loss rather than cross-entropy, leading to a better-calibrated network whose predictions are less peaky in the wrong place.

Theoretical justification: As mentioned previously, once a model trained using cross-entropy reaches high training accuracy, the optimiser may try to further reduce the training NLL by increasing the confidences for the correctly classified samples. It may achieve this by magnifying the network weights to increase the magnitudes of the logits. To verify this hypothesis, we plot the $L_2$ norm of the weights of the last linear layer for all four networks as a function of the training epoch (see Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(e)). Notably, although the norms of the weights for the models trained on focal loss are initially higher than that for the cross-entropy model, a complete reversal in the ordering of the weight norms occurs between epochs $150$ and $250$. In other words, as the networks start to become miscalibrated, the weight norm for the cross-entropy model also starts to become greater than those for the focal loss models. In practice, this is because focal loss, by design, starts to act as a regulariser on the network's weights once the model has gained a certain amount of confidence in its predictions. This behaviour of focal loss can be observed even on a much simpler setup like a linear model (see Appendix 10{reference-type="ref" reference="linear_model"}). To better understand this, we start by considering the following proposition (proof in Appendix 11{reference-type="ref" reference="sec:proof"}):

::: {#pro1 .pro} Proposition 1. For focal loss $\mathcal{L}_f$ and cross-entropy $\mathcal{L}_c$, the gradients $\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}} = \frac{\partial \mathcal{L}_c}{\partial \mathbf{w}} g(\hat{p}_{i,y_i}, \gamma)$, where $g(p, \gamma) = (1-p)^\gamma - \gamma p (1-p)^{\gamma - 1} \log(p)$, $\gamma \in \mathbb{R}^+$ is the focal loss hyperparameter, and $\mathbf{w}$ denotes the parameters of the last linear layer. Thus $\left\lVert\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}}\right\rVert \leq \left\lVert\frac{\partial \mathcal{L}_c}{\partial \mathbf{w}}\right\rVert$ if $g(\hat{p}_{i,y_i}, \gamma) \in [0, 1]$. :::

Proposition 1{reference-type="ref" reference="pro1"} shows the relationship between the norms of the gradients of the last linear layer for focal loss and cross-entropy loss, for the same network architecture. Note that this relation depends on a function $g(p, \gamma)$, which we plot in Figure 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(a) to understand its behaviour. It is clear that for every $\gamma$, there exists a (different) threshold $p_0$ such that for all $p \in [0,p_0]$, $g(p,\gamma) \ge 1$, and for all $p \in (p_0, 1]$, $g(p,\gamma) < 1$. (For example, for $\gamma = 1$, $p_0 \approx 0.4$.) We use this insight to further explain why focal loss provides implicit weight regularisation.
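The function $g$ and its crossover threshold $p_0$ are easy to reproduce numerically. The bisection helper below is our own and relies on the single-crossing behaviour just described; for $\gamma = 1$ the crossing can even be found in closed form, since $g(p, 1) = 1 - p - p\log p = 1$ gives $p_0 = 1/e \approx 0.37$, consistent with the $\approx 0.4$ read off the plot.

```python
import math

def g(p, gamma):
    """Gradient-ratio factor from Proposition 1:
    grad(L_f) = grad(L_c) * g(p, gamma)."""
    return (1 - p) ** gamma - gamma * p * (1 - p) ** (gamma - 1) * math.log(p)

def threshold_p0(gamma, tol=1e-10):
    """Bisection for the crossover p0 with g(p0, gamma) = 1:
    g > 1 below p0 (larger updates than cross-entropy), g < 1 above it."""
    lo, hi = 1e-6, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid, gamma) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same helper also reproduces the values used later in the text, e.g. $g(0.25, 3) \approx 1$ and $g(0.2, 5) \approx 1$.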

(a): g(p, γ) vs. p and (b-d): histograms of the gradient norms of the last linear layer for both cross-entropy and focal loss.

Implicit weight regularisation: For a network trained using focal loss with a fixed $\gamma$, during the initial stages of the training, when $\hat{p}_{i,y_i} \in (0,p_0)$, $g(\hat{p}_{i,y_i}, \gamma) > 1$. This implies that the confidences of the focal loss model's predictions will initially increase faster than they would for cross-entropy. However, as soon as $\hat{p}_{i,y_i}$ crosses the threshold $p_0$, $g(\hat{p}_{i,y_i}, \gamma)$ falls below $1$ and reduces the size of the gradient updates made to the network weights, thereby having a regularising effect on the weights. This is why, in Figure 2{reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(e), we find that the weight norms of the models trained with focal loss are initially higher than that for the model trained using cross-entropy. However, as training progresses, we find that the ordering of the weight norms reverses, as focal loss starts regularising the network weights. Moreover, we can draw similar insights from Figures 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(b), 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(c) and 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(d), in which we plot histograms of the gradient norms of the last linear layer (over all samples in the training set) at epochs $10$, $100$ and $200$, respectively. At epoch $10$, the gradient norms for cross-entropy and focal loss are similar, but as training progresses, those for focal loss decrease more rapidly than those for cross-entropy, and from then on remain consistently lower.

Finally, observe in Figure 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(a) that for higher $\gamma$ values, the fall in $g(p,\gamma)$ is steeper. We would thus expect a greater weight regularisation effect for models that use higher values of $\gamma$. This explains why, of the three models that we trained using focal loss, the one with $\gamma = 3$ outperforms (in terms of calibration) the one with $\gamma = 2$, which in turn outperforms the model with $\gamma = 1$. Based on this observation, one might think that, in general, a higher value of $\gamma$ would lead to a more calibrated model. However, this is not the case, as we notice from Figure 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(a) that for $\gamma \ge 7$, $g(p,\gamma)$ reduces to nearly $0$ for a relatively low value of $p$ (around $0.5$). As a result, using values of $\gamma$ that are too high will cause the gradients to die (i.e. reduce to nearly $0$) early, at a point at which the network's predictions remain ambiguous, thereby causing the training process to fail.

How to choose $\gamma$: As discussed, focal loss provides implicit entropy and weight regularisation, which heavily depend on the value of $\gamma$. Finding an appropriate $\gamma$ is normally done using cross-validation. Also, traditionally, $\gamma$ is fixed for all samples in the dataset. However, as shown, the regularisation effect for a sample $i$ depends on $\hat{p}_{i,y_i}$, i.e. the predicted probability for the ground-truth label for the sample. It thus makes sense to choose $\gamma$ for each sample based on the value of $\hat{p}_{i,y_i}$. To this end, we provide Proposition 2{reference-type="ref" reference="pro:gamma"} (proof in Appendix 11{reference-type="ref" reference="sec:proof"}), which we use to find a solution to this problem:

::: {#pro:gamma .pro} Proposition 2. Given a $p_0$, for $1 \geq p \geq p_0 > 0$, $g(p, \gamma) \leq 1$ for all $\gamma \geq \gamma^* = \frac{a}{b} + \frac{1}{\log a}W_{-1} \big(-\frac{a^{(1-a/b)}}{b} \log a \big)$, where $a = 1-p_0$, $b = p_0 \log p_0$, and $W_{-1}$ is the Lambert-W function [@corless1996lambertw]. Moreover, for $p \geq p_0 > 0$ and $\gamma \geq \gamma^*$, the equality $g(p, \gamma) = 1$ holds only for $p = p_0$ and $\gamma = \gamma^*$. :::

It is worth noting that there exist multiple values of $\gamma$ where $g(p, \gamma) \leq 1$ for all $p \geq p_0$. For a given $p_0$, Proposition 2{reference-type="ref" reference="pro:gamma"} allows us to compute $\gamma$ s.t. (i) $g(p_0,\gamma) = 1$; (ii) $g(p, \gamma) \ge 1$ for $p \in [0,p_0)$; and (iii) $g(p, \gamma) < 1$ for $p \in (p_0,1]$. This allows us to control the magnitude of the gradients for a particular sample $i$ based on the current value of $\hat{p}_{i,y_i}$, and gives us a way of obtaining an informed value of $\gamma$ for each sample. For instance, a reasonable policy might be to choose $\gamma$ s.t. $g(\hat{p}_{i,y_i}, \gamma) > 1$ if $\hat{p}_{i,y_i}$ is small (say less than $0.25$), and $g(\hat{p}_{i,y_i}, \gamma) < 1$ otherwise. Such a policy will have the effect of making the weight updates larger for samples having a low predicted probability for the correct class and smaller for samples with a relatively higher predicted probability for the correct class.

Following the aforementioned arguments, we choose a threshold $p_0$ of $0.25$, and use Proposition 2{reference-type="ref" reference="pro:gamma"} to obtain a $\gamma$ policy such that $g(p, \gamma)$ is observably greater than $1$ for $p \in [0, 0.25)$ and $g(p, \gamma) < 1$ for $p \in (0.25, 1]$. In particular, we use the following schedule: if $\hat{p}_{i,y_i} \in [0,0.2)$, then $\gamma = 5$, otherwise $\gamma = 3$ (note that $g(0.2, 5) \approx 1$ and $g(0.25, 3) \approx 1$: see Figure 3{reference-type="ref" reference="fig:g_pt_grad_norms"}(a)). We find this $\gamma$ policy to perform consistently well across multiple classification datasets and network architectures. Having said that, one can calculate multiple such schedules for $\gamma$ following Proposition 2{reference-type="ref" reference="pro:gamma"}, using the intuition of having a relatively high $\gamma$ for low values of $\hat{p}_{i,y_i}$ and a relatively low $\gamma$ for high values of $\hat{p}_{i,y_i}$.
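The sample-dependent schedule just described is easy to state in code. The helper names below are our own, and the thresholds follow the text; the asserts in the comments can be verified against Figure 3(a).

```python
import math

def g(p, gamma):
    """Gradient-ratio factor from Proposition 1."""
    return (1 - p) ** gamma - gamma * p * (1 - p) ** (gamma - 1) * math.log(p)

def sample_gamma(p_correct):
    """Schedule from the text: gamma = 5 when the predicted probability
    of the true class is below 0.2, gamma = 3 otherwise."""
    return 5.0 if p_correct < 0.2 else 3.0

def sample_dependent_focal_loss(p_correct):
    """Per-sample focal loss under the schedule above: large gradient
    factor (g > 1) for poorly classified samples, damped (g < 1) once
    the model is reasonably confident in the correct class."""
    gamma = sample_gamma(p_correct)
    return -((1.0 - p_correct) ** gamma) * math.log(p_correct)
```

A usage check: for a poorly classified sample with $\hat{p}_{i,y_i} = 0.1$, the schedule picks $\gamma = 5$ and $g(0.1, 5) > 1$ (weight updates larger than cross-entropy), while for $\hat{p}_{i,y_i} = 0.5$ it picks $\gamma = 3$ and $g(0.5, 3) < 1$ (updates damped).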