| 1 |
+
Title: Dilated Convolution with Learnable Spacings makes visual models more aligned with humans: a Grad-CAM study
|
| 2 |
+
|
| 3 |
+
URL Source: https://arxiv.org/html/2408.03164
|
| 4 |
+
|
| 5 |
+
Published Time: Wed, 07 Aug 2024 00:38:30 GMT
|
| 6 |
+
|
| 7 |
+
Markdown Content:
|
| 8 |
+
Ismail Khalfaoui-Hassani 3 TimothΓ©e Masquelier 2,3
|
| 9 |
+
|
| 10 |
+
1 LIS, CNRS.
|
| 11 |
+
|
| 12 |
+
2 CerCo UMR 5549, CNRS. 3 UniversitΓ© de Toulouse, France.
|
| 13 |
+
|
| 14 |
+
rabih.chamas@lis-lab.fr, ismail.khalfaoui-hassani@univ-tlse3.fr, timothee.masquelier@cnrs.fr
|
| 15 |
+
|
| 16 |
+
###### Abstract
|
| 17 |
+
|
| 18 |
+
Dilated Convolution with Learnable Spacing (DCLS) is a recent advanced convolution method that allows enlarging the receptive fields (RF) without increasing the number of parameters, like the dilated convolution, yet without imposing a regular grid. DCLS has been shown to outperform the standard and dilated convolutions on several computer vision benchmarks. Here, we show that, in addition, DCLS increases the modelsβ interpretability, defined as the alignment with human visual strategies. To quantify it, we use the Spearman correlation between the modelsβ Grad-CAM heatmaps and the ClickMe dataset heatmaps, which reflect human visual attention. We took eight reference models β ResNet50, ConvNeXt (T, S and B), CAFormer, ConvFormer, and FastViT (sa_24 and 36) β and drop-in replaced the standard convolution layers with DCLS ones. This improved the interpretability score in seven of them. Moreover, we observed that Grad-CAM generated random heatmaps for two models in our study: CAFormer and ConvFormer models, leading to low interpretability scores. We addressed this issue by introducing Threshold-Grad-CAM, a modification built on top of Grad-CAM that enhanced interpretability across nearly all models. The code and checkpoints to reproduce this study are available at: [https://github.com/rabihchamas/DCLS-GradCAM-Eval](https://github.com/rabihchamas/DCLS-GradCAM-Eval)
|
| 19 |
+
|
| 20 |
+
1 Introduction
--------------

Deep learning neural networks are extremely powerful for a myriad of tasks, including image classification. However, despite their power, they remain black-box models, and understanding how they arrive at their results can be a major challenge. Explainability methods in deep learning aim to explain why a particular model predicts a particular result.

Image classification is one application where explainability methods have been used successfully for several years. The most popular and successful models for image classification nowadays include convolution and/or attention layers.

When a model contains only convolutions, it is called a convolutional neural network (CNN); when it contains only multi-head self-attention (MHSA) layers, it is called a transformer or, in the context of computer vision, a vision transformer; and when a model contains both kinds of layers, it is called a hybrid model.

Despite their very high accuracy, most of these models remain very opaque, and this lack of explainability, especially in computer vision, raises concerns about trust and fairness, hindering their adoption in sensitive areas such as medical diagnosis (Collenne et al., [2024](https://arxiv.org/html/2408.03164v1#bib.bib2)) or autonomous vehicles.

This also applies to recent advances such as Dilated Convolution with Learnable Spacings (DCLS) (Khalfaoui-Hassani et al., [2023b](https://arxiv.org/html/2408.03164v1#bib.bib11)), which shows promising performance gains in tasks such as image classification, segmentation, and object detection.

While the accuracy of DCLS is encouraging, its black-box nature demands attention. This motivated us to explore explainability measures and scores specifically for DCLS, with the hope of shedding light on its underlying decision-making processes.

The taxonomies used for explainability research in artificial intelligence are diverse and constantly evolving as new approaches are discovered. However, a common way to proceed is to distinguish between two major families of model explainability methods: global methods and local methods (Speith, [2022](https://arxiv.org/html/2408.03164v1#bib.bib21); Schwalbe and Finzel, [2023](https://arxiv.org/html/2408.03164v1#bib.bib19)).

Global methods describe the overall behavior of the model, considering general patterns and the importance of features; examples include Partial Dependence Plots (PDPs) (Friedman, [2001](https://arxiv.org/html/2408.03164v1#bib.bib7)) and SHapley Additive exPlanations (SHAP) (Lundberg and Lee, [2017](https://arxiv.org/html/2408.03164v1#bib.bib15)). Local methods, on the other hand, explain individual predictions, focusing on why the model made a particular decision for a particular input; examples include Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., [2016](https://arxiv.org/html/2408.03164v1#bib.bib17)) and Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al., [2017](https://arxiv.org/html/2408.03164v1#bib.bib20)).

Grad-CAM is a popular method that helps visualize which parts of an image are most important to the model's decision. For the needs of our study, we designed a new explainability method based on Grad-CAM that we call Threshold-Grad-CAM. This new method overcomes some failure cases of traditional Grad-CAM, in particular on the ConvFormer and CAFormer architectures (Yu et al., [2023](https://arxiv.org/html/2408.03164v1#bib.bib25)).

The objective of this paper is, on the one hand, to perform a comparative study of explainability scores between recent state-of-the-art computer vision models, namely ConvNeXt (Liu et al., [2022](https://arxiv.org/html/2408.03164v1#bib.bib14)), ConvFormer (Yu et al., [2023](https://arxiv.org/html/2408.03164v1#bib.bib25)), CAFormer (Yu et al., [2023](https://arxiv.org/html/2408.03164v1#bib.bib25)), and FastViT (Vasu et al., [2023](https://arxiv.org/html/2408.03164v1#bib.bib22)) in their original form, and, on the other hand, to perform the same study between these models and their DCLS-enhanced counterparts.

What motivated the comparative study presented in this paper is the qualitative similarity we noticed between human attention heatmaps obtained from the ClickMe dataset (Linsley et al., [2019](https://arxiv.org/html/2408.03164v1#bib.bib13)) and those obtained by models empowered with DCLS. [Figure 1](https://arxiv.org/html/2408.03164v1#S1.F1) gives an overview of this similarity based on the ConvNeXt-B model. The images presented in Figure 1 were selected from the ClickMe dataset; to help illustrate our point, we chose a few images where the heatmaps are visually relevant. We then quantitatively confirmed this remarkable alignment of DCLS models with human attention heatmaps through a rigorous study of the Spearman correlation (Zar, [2005](https://arxiv.org/html/2408.03164v1#bib.bib26)) between heatmaps generated by Threshold-Grad-CAM and heatmaps made by human participants in the ClickMe dataset.


|
| 46 |
+
|
| 47 |
+
Figure 1: Visualization of Heatmaps on ClickMe dataset Images. First row: original images from the ClickMe dataset. Second row: the same images superimposed with heatmaps created by humans from the ClickMe project. Third row: Threshold-GradCAM heatmaps of the ConvNeXt base model enhanced with DCLS. Fourth row: Threshold-GradCAM heatmaps of the baseline ConvNeXt base model without DCLS.
|
| 48 |
+
|
| 49 |
+
We will refer to a model empowered with DCLS by its original name followed by the suffix "_dcls". We create a DCLS-empowered model by performing a drop-in replacement of all the depthwise separable convolutions of the model (Chollet, [2017](https://arxiv.org/html/2408.03164v1#bib.bib1); Sandler et al., [2018](https://arxiv.org/html/2408.03164v1#bib.bib18)) with DCLS ones.

DCLS was introduced in Khalfaoui-Hassani et al. ([2023b](https://arxiv.org/html/2408.03164v1#bib.bib11)), where it exhibited better performance than the depthwise separable convolution and the dilated convolution (Yu and Koltun, [2015](https://arxiv.org/html/2408.03164v1#bib.bib24)) on computer vision tasks such as image classification, semantic segmentation, and object detection, as well as on computer audition tasks such as audio classification (Khalfaoui-Hassani et al., [2023a](https://arxiv.org/html/2408.03164v1#bib.bib10)). Initially, DCLS used bilinear interpolation; in Khalfaoui-Hassani et al. ([2023c](https://arxiv.org/html/2408.03164v1#bib.bib12)), this interpolation was extended to a Gaussian one. DCLS learns the positions of the kernel elements jointly with their weights. We believe this advance is important for tasks that require a nuanced understanding of visual context, similar to human perception.

2 Methods
---------

### 2.1 ClickMe dataset

To quantitatively evaluate the interpretability of the models, we employed the ClickMe dataset (Linsley et al., [2019](https://arxiv.org/html/2408.03164v1#bib.bib13)), which was introduced to capture human attention strategies in classification tasks. The dataset was collected through a single-player online game, ClickMe.ai (Linsley et al., [2019](https://arxiv.org/html/2408.03164v1#bib.bib13)), in which players identified the most informative parts of an image for object recognition. The alignment of model-generated heatmaps with those from the ClickMe dataset measures how closely a model's attention strategy mirrors the human one.

### 2.2 DCLS method

Although larger convolution kernels can improve performance, increasing the kernel size increases the number of parameters and the computational cost. Yu and Koltun ([2015](https://arxiv.org/html/2408.03164v1#bib.bib24)) introduced dilated convolution (DC) to expand the kernel without increasing the number of parameters: DC inserts zeros between kernel elements, effectively enlarging the kernel without adding new weights. However, DC uses a fixed regular grid, which can limit performance.

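To make the zero-insertion concrete, here is a minimal sketch in plain Python (our illustration, not code from the paper): it builds a 1D dilated kernel from a 3-tap kernel and shows that the receptive field grows to k + (k-1)(d-1) while the number of non-zero weights stays fixed.

```python
def dilate_1d(kernel, dilation):
    """Insert (dilation - 1) zeros between consecutive kernel elements."""
    out = []
    for i, w in enumerate(kernel):
        out.append(w)
        if i < len(kernel) - 1:          # no trailing zeros after the last tap
            out.extend([0.0] * (dilation - 1))
    return out

kernel = [0.25, 0.5, 0.25]               # 3 learnable weights
dilated = dilate_1d(kernel, 3)           # effective size: 3 + 2*(3-1) = 7
print(dilated)                           # [0.25, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25]
```

The zeros are fixed by the grid, which is exactly the rigidity that DCLS removes.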
Khalfaoui-Hassani et al. ([2023b](https://arxiv.org/html/2408.03164v1#bib.bib11)) presented DCLS as a new method that builds upon DC. Instead of using fixed spacings between the non-zero elements of the kernel, DCLS learns these spacings through backpropagation. An interpolation technique is used to overcome the discrete nature of the positions while maintaining the differentiability necessary for backpropagation.

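As an illustration of the interpolation idea (a simplified plain-Python sketch, not the authors' differentiable GPU implementation), each weight sitting at a continuous position is spread over its two nearest integer grid cells with linear (triangle) coefficients, so the constructed kernel varies smoothly with the learned positions:

```python
def dcls_kernel_1d(size, weights, positions):
    """Scatter each weight onto a dense kernel of length `size` by linearly
    interpolating its continuous position (assumed to lie in [0, size-1])."""
    kernel = [0.0] * size
    for w, p in zip(weights, positions):
        left = int(p)                    # grid index at or below p
        frac = p - left                  # fractional part, in [0, 1)
        kernel[left] += w * (1.0 - frac)
        if left + 1 < size:
            kernel[left + 1] += w * frac
    return kernel

# Three weights at fractional positions (learnable in DCLS, fixed here).
print(dcls_kernel_1d(7, [1.0, 1.0, 1.0], [0.5, 3.0, 5.25]))
# [0.5, 0.5, 0.0, 1.0, 0.0, 0.75, 0.25]
```

Because the interpolation coefficients are differentiable in `p`, gradients can flow back to the positions, which is the core of the method.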
### 2.3 Grad-CAM and Threshold-Grad-CAM

Grad-CAM is a technique that provides visual explanations for the decisions made by deep neural networks. The method uses the gradients of a target concept, propagated into the final convolutional layer of the network, to produce a localization map highlighting the regions of the input image that are crucial for predicting this concept (Selvaraju et al., [2017](https://arxiv.org/html/2408.03164v1#bib.bib20)). Grad-CAM adapts to various network architectures by focusing on the last layer of interest before a classification head or pooling operation. The method is detailed in the supplementary material.

#### 2.3.1 Threshold-Grad-CAM

In the standard implementation of Grad-CAM, a ReLU activation is applied to the weighted combination of activation maps after the summation. This is predicated on the assumption that only positive features should be highlighted, as they are the ones contributing to the class prediction (Selvaraju et al., [2017](https://arxiv.org/html/2408.03164v1#bib.bib20)). However, our observations suggest that applying ReLU after the summation can inadvertently suppress useful signals when negative activations are present, as they may cancel some positive activations when summed. This phenomenon is particularly pronounced in architectures such as ConvFormer and CAFormer, where we observed that the resulting heatmaps were no more informative than random heatmaps. We believe this is due to the choice of a specific activation in these two architectures, StarReLU (Yu et al., [2023](https://arxiv.org/html/2408.03164v1#bib.bib25)), which depends on two learnable parameters: a scale and a bias.

To address this issue, we propose applying ReLU to each activation map before the summation. We then normalize the heatmap and, finally, threshold it to retain only the values above a predetermined threshold (experimentally set to 0.3 for optimal results on the ClickMe dataset). The modified Grad-CAM process is described in the supplementary material. Our experiments demonstrated that this modification significantly improved the interpretability of the heatmaps generated for ConvFormer and CAFormer.

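The modification above can be sketched as follows (a plain-Python illustration on nested lists; the function name is ours, not the paper's code, and the 0.3 default is the threshold reported above):

```python
def threshold_grad_cam(feature_maps, alphas, threshold=0.3):
    """Combine channel activation maps into a heatmap, applying ReLU per
    channel *before* summation, then normalizing and thresholding."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for alpha, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                heat[i][j] += max(alpha * fmap[i][j], 0.0)   # ReLU before the sum
    peak = max(max(row) for row in heat) or 1.0              # avoid divide-by-zero
    return [[v / peak if v / peak >= threshold else 0.0 for v in row]
            for row in heat]

# Two 2x2 channel maps: the negative channel can no longer cancel the positive one,
# and the weak 0.2 response is removed by the 0.3 threshold.
maps = [[[1.0, 0.2], [0.0, 0.8]],
        [[-1.0, -0.2], [0.0, -0.8]]]
print(threshold_grad_cam(maps, alphas=[1.0, 1.0]))           # [[1.0, 0.0], [0.0, 0.8]]
```

With the standard order (sum first, ReLU after), this toy input would produce an all-zero heatmap, which mirrors the failure we observed on ConvFormer and CAFormer.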
3 Related work
--------------

The field of interpretable and explainable AI has recently gained significant attention within the AI community. Extensive research efforts range from defining key terms such as interpretability and explainability to developing explainability methods, assessing their trustworthiness, and evaluating the interpretability of deep learning models. Gilpin et al. ([2018](https://arxiv.org/html/2408.03164v1#bib.bib8)) distinguished between interpretability and explainability and highlighted the challenge of achieving explanations that are both complete and interpretable. Doshi-Velez and Kim ([2017](https://arxiv.org/html/2408.03164v1#bib.bib5)) defined interpretability as the ability to present model decisions in terms understandable to humans. Mohseni et al. ([2021](https://arxiv.org/html/2408.03164v1#bib.bib16)) utilized multi-layer human attention masks to benchmark the effectiveness of explanation methods such as Grad-CAM and LIME. Velmurugan et al. ([2020](https://arxiv.org/html/2408.03164v1#bib.bib23)) proposed functionally grounded evaluation metrics that assess the trustworthiness of explainability methods, including LIME and SHAP. Furthermore, Fel et al. ([2022](https://arxiv.org/html/2408.03164v1#bib.bib6)) employed the ClickMe dataset to investigate the alignment between human and deep neural network (DNN) visual strategies, and applied a training routine that aligns these strategies, thereby enhancing categorization accuracy. Our study aligns with the interpretability definitions in this literature: we employ human heatmaps from the ClickMe dataset as ground truth to evaluate our models' interpretability.

4 Experiments
-------------

In this section, we present the experimental setup used to compare the performance and interpretability of the models under study. Specifically, we computed the top-1 accuracy of the models on the ImageNet1k validation set (Deng et al., [2009](https://arxiv.org/html/2408.03164v1#bib.bib3)) to assess their classification effectiveness. For interpretability, we employed Spearman's rank correlation to measure the alignment between the human-generated heatmaps from the ClickMe dataset and the model-generated ones. We assessed the interpretability of heatmaps produced by two different methods: Grad-CAM and our proposed Threshold-Grad-CAM.

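The interpretability score boils down to a rank correlation between two flattened heatmaps. A minimal stdlib-only version is sketched below (our illustration, not the paper's evaluation code; it assumes no tied values, whereas a full implementation would average the ranks of ties):

```python
def spearman(x, y):
    """Spearman correlation: the Pearson correlation of the ranks.
    Assumes no ties in x or y."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

human = [0.9, 0.1, 0.4, 0.7]   # flattened human heatmap (toy values)
model = [0.8, 0.2, 0.3, 0.5]   # model heatmap with the same pixel ranking
print(round(spearman(human, model), 3))   # 1.0
```

Because only the ranks matter, the score is insensitive to the different intensity scales of human click maps and Grad-CAM heatmaps.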
### 4.1 Results

We present the results of integrating DCLS into state-of-the-art neural network architectures, together with our novel update to the Grad-CAM technique. Our experiments evaluated model interpretability, which we defined as the degree of alignment between heatmaps generated by explainability methods and those derived from human visualization strategies.

### 4.2 Improvement in Model Interpretability with DCLS

Our experiments incorporated DCLS into five model families: ResNet, ConvNeXt, CAFormer, ConvFormer, and FastViT. We trained each model on ImageNet1k. A model name carrying the _dcls suffix indicates that training was done after replacing each depthwise separable convolution of the baseline model with a DCLS one.

The results showed an enhancement in model interpretability for all models but FastViT_sa24. When equipped with DCLS, ConvNeXt improved in heatmap alignment; the score improved with both the Grad-CAM and Threshold-Grad-CAM methods, as shown in [Table 1](https://arxiv.org/html/2408.03164v1#S4.T1) and in [Figure 2](https://arxiv.org/html/2408.03164v1#S4.F2).

Table 1: Interpretability scores of various models on the ClickMe dataset using Grad-CAM and Threshold-Grad-CAM, with and without DCLS. The table presents the top-1 accuracy of each model alongside their respective interpretability scores. Models with the "_dcls" suffix indicate the use of DCLS.



Figure 2: Comparison of model interpretability scores using Threshold-Grad-CAM with and without DCLS. Each point represents a different model, plotted according to its interpretability score without DCLS on the x-axis and with DCLS on the y-axis. Models above the dashed line demonstrate improved performance with the inclusion of DCLS.

Since Grad-CAM generates random heatmaps on the CAFormer and ConvFormer architectures, we experimented with Threshold-Grad-CAM. Similar to ConvNeXt, CAFormer and ConvFormer showed higher interpretability scores when equipped with DCLS. The FastViT_sa24 model showed a high interpretability score even without incorporating DCLS, and applying DCLS did not improve the score.

In addition, DCLS increased the top-1 accuracy of all models but CAFormer_s18 and FastViT_sa36 ([Table 1](https://arxiv.org/html/2408.03164v1#S4.T1)).

5 Discussion
------------

Except for FastViT, all the model families studied show two effects: first, an increase in accuracy when the depthwise separable convolution is replaced by DCLS, and second, an increase in the Threshold-Grad-CAM explainability score when this same modification is made. FastViT is a special case because test-time inference is performed with a kernel reparametrization identical to that of RepLKNet (Ding et al., [2022](https://arxiv.org/html/2408.03164v1#bib.bib4)). This could interfere with the DCLS method, which is in fact a different reparametrization, and might explain why the results for this family of models were not correlated in the same way as for the other studied models. Furthermore, the results presented here are significant: we tested three different training seeds for the ConvNeXt-T-dcls model and found an accuracy of 82.49 ± 0.04 and a Threshold-Grad-CAM score of 0.7466 ± 0.004.

Fel et al. ([2022](https://arxiv.org/html/2408.03164v1#bib.bib6)) utilized the ClickMe dataset to compare human and DNN visual strategies on ImageNet (Deng et al., [2009](https://arxiv.org/html/2408.03164v1#bib.bib3)). They adopted a classic explainability method, Image Feature Saliency (Jiang et al., [2015](https://arxiv.org/html/2408.03164v1#bib.bib9)), to generate comparable feature importance maps for 84 deep neural networks (DNNs). They report that as DNNs become more accurate, a trade-off emerges in which their alignment with human visual strategies starts to decrease. In contrast, our study employs Grad-CAM for analyzing DNN visual strategies and, unlike Fel et al. ([2022](https://arxiv.org/html/2408.03164v1#bib.bib6)), did not reveal such a trade-off. We think this discrepancy is due to differences in the explainability methods used, which highlights the influence of the analytical tools in interpreting DNN visual strategies.

Furthermore, it is conceivable that models with higher interpretability scores focus more on the image features most correlated with the label class. A preliminary examination of the ClickMe dataset reveals that humans tend to concentrate solely on the object representing the class label within the image, ignoring other features less directly related to the class. This behavior likely stems from a nuanced human understanding of the concepts involved. Alignment with human-generated heatmaps might therefore reflect a model's robustness.

6 Conclusion
------------

In this study, we investigated the interpretability of recent deep neural networks using Grad-CAM-based methods on image classification tasks. We found that employing Dilated Convolution with Learnable Spacings enhances network interpretability. Our results indicate that DCLS-equipped models align better with human visual perception, suggesting that such models effectively capture conceptually relevant features, akin to human understanding. Future work could investigate the explainability score of DCLS using black-box methods such as RISE.

References
----------

* Chollet [2017] François Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR, pages 1251–1258, 2017.
* Collenne et al. [2024] Jules Collenne, Jilliana Monnier, Rabah Iguernaissi, Motasem Nawaf, Marie-Aleth Richard, Jean-Jacques Grob, Caroline Gaudy-Marqueste, Séverine Dubuisson, and Djamal Merad. Fusion between an algorithm based on the characterization of melanocytic lesions' asymmetry with an ensemble of convolutional neural networks for melanoma detection. Journal of Investigative Dermatology, 2024.
* Deng et al. [2009] Jia Deng et al. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
* Ding et al. [2022] Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11963–11975, 2022.
* Doshi-Velez and Kim [2017] Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. stat, 1050:2, 2017.
* Fel et al. [2022] Thomas Fel, Ivan F Rodriguez Rodriguez, Drew Linsley, and Thomas Serre. Harmonizing the object recognition strategies of deep neural networks with humans. Advances in Neural Information Processing Systems, 35:9432–9446, 2022.
* Friedman [2001] Jerome H Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
* Gilpin et al. [2018] Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89, 2018.
* Jiang et al. [2015] Ming Jiang, Shengsheng Huang, Juanyong Duan, and Qi Zhao. SALICON: Saliency in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1072–1080, 2015.
* Khalfaoui-Hassani et al. [2023a] Ismail Khalfaoui-Hassani, Timothée Masquelier, and Thomas Pellegrini. Audio classification with dilated convolution with learnable spacings. In NeurIPS 2023 Workshop on Machine Learning for Audio, 2023.
* Khalfaoui-Hassani et al. [2023b] Ismail Khalfaoui-Hassani, Thomas Pellegrini, and Timothée Masquelier. Dilated convolution with learnable spacings. In The Eleventh International Conference on Learning Representations, 2023.
* Khalfaoui-Hassani et al. [2023c] Ismail Khalfaoui-Hassani, Thomas Pellegrini, and Timothée Masquelier. Dilated convolution with learnable spacings: Beyond bilinear interpolation. In ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, 2023.
* Linsley et al. [2019] Drew Linsley, Dan Shiebler, Sven Eberhardt, and Thomas Serre. Learning what and where to attend. In International Conference on Learning Representations, 2019.
* Liu et al. [2022] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976–11986, 2022.
* Lundberg and Lee [2017] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017.
* Mohseni et al. [2021] Sina Mohseni, Jeremy E Block, and Eric Ragan. Quantitative evaluation of machine learning explanations: A human-grounded benchmark. In Proceedings of the 26th International Conference on Intelligent User Interfaces, IUI '21, pages 22–31, New York, NY, USA, 2021. Association for Computing Machinery.
* Ribeiro et al. [2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, 2016.
* Sandler et al. [2018] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
* Schwalbe and Finzel [2023] Gesina Schwalbe and Bettina Finzel. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Mining and Knowledge Discovery, pages 1–59, 2023.
* Selvaraju et al. [2017] Ramprasaath R Selvaraju et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.
* Speith [2022] Timo Speith. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2239–2250, 2022.
* Vasu et al. [2023] Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel, and Anurag Ranjan. FastViT: A fast hybrid vision transformer using structural reparameterization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5785–5795, 2023.
* Velmurugan et al. [2020] Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira, and Renuka Sindhgatta. Evaluating explainable methods for predictive process analytics: A functionally-grounded approach, 2020.
* Yu and Koltun [2015] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
* Yu et al. [2023] Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng, Shuicheng Yan, and Xinchao Wang. MetaFormer baselines for vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
* Zar [2005] Jerrold H Zar. Spearman rank correlation. Encyclopedia of Biostatistics, 7, 2005.

Appendix A Appendix: Grad-CAM Implementation
|
| 151 |
+
--------------------------------------------

Algorithm 1 Grad-CAM Implementation

Input: Image $I$, target class $c$, trained convolutional neural network CNN

Output: Heatmap $H$ visually highlighting influential regions for class $c$

1. **Forward Pass:** Process image $I$ through the CNN to obtain the feature maps $A$ at the last convolutional layer. Let $A^k$ be the feature map for the $k$-th channel.
2. **Compute Gradients:** Compute the gradient of the loss for class $c$, denoted $y^c$, with respect to the feature maps $A$, yielding $\frac{\partial y^c}{\partial A^k}$.
3. **Global Average Pooling of Gradients:** For each feature map channel $k$, compute the global average of the gradients:
$$\alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial y^c}{\partial A_{ij}^k}$$
where $i, j$ index the spatial dimensions and $Z$ is the number of elements in $A^k$.
4. **Weighted Combination of Feature Maps:** Compute the weighted sum of the feature maps using the weights $\alpha_k^c$:
$$L^c = \mathrm{ReLU}\left(\sum_k \alpha_k^c A^k\right)$$
5. **Generate Heatmap:** Resize $L^c$ to the size of the input image $I$ to obtain the heatmap $H$.
6. **Overlay Heatmap on Original Image:** Superimpose $H$ onto the original image $I$ for visualization, adjusting the transparency to ensure visibility of the underlying features.
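Once the feature maps and their gradients have been extracted (in practice via framework hooks on the last convolutional layer), steps 3 and 4 above reduce to a few array operations. The following is a minimal NumPy sketch of that core computation, not the paper's exact implementation; the function name and array shapes are illustrative:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Steps 3-4 of Algorithm 1.

    feature_maps: array of shape (K, H, W) -- the maps A^k
    gradients:    array of shape (K, H, W) -- dy^c / dA^k
    Returns the (H, W) class-activation map L^c before resizing.
    """
    # alpha_k^c = (1/Z) sum_ij dy^c/dA_ij^k  (global average pooling)
    alphas = gradients.mean(axis=(1, 2))                # shape (K,)
    # L^c = ReLU(sum_k alpha_k^c * A^k)
    cam = np.tensordot(alphas, feature_maps, axes=1)    # shape (H, W)
    return np.maximum(cam, 0.0)
```

Step 5 would then resize the returned map to the input resolution with any standard image-resampling routine before overlaying it on the image.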
Appendix B: Threshold-Grad-CAM Implementation
---------------------------------------------

Algorithm 2 Threshold-Grad-CAM

Input: Weighted activation maps $A^k$

Parameter: Threshold value $t = 0.3$

Output: Final heatmap $H$

1. **Apply ReLU Activation:** Apply the ReLU function to each weighted activation map to filter out negative values. This step prevents the cancellation of positive activations during summation:
$$A^k_{\mathrm{ReLU}} = \mathrm{ReLU}\left(\alpha_k^c A^k\right)$$
2. **Summation of Activated Maps:** Sum the ReLU-activated maps:
$$S = \sum_k A^k_{\mathrm{ReLU}}$$
3. **Normalization:** Normalize the summed activation map $S$ to ensure values are scaled consistently:
$$N = \frac{S}{\max(S)}$$
4. **Apply Thresholding:** Apply a threshold of $t$ to reduce noise and enhance the focus on relevant regions:
$$H = \begin{cases} N & \text{if } N \geq t \\ 0 & \text{otherwise} \end{cases}$$
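The four steps translate directly into NumPy. The following is a minimal sketch under the same notation; the small epsilon guarding against an all-zero map is our addition, not part of the algorithm as stated:

```python
import numpy as np

def threshold_grad_cam(weighted_maps, t=0.3):
    """Algorithm 2 on weighted activation maps alpha_k^c * A^k, shape (K, H, W)."""
    relu_maps = np.maximum(weighted_maps, 0.0)   # step 1: per-map ReLU
    s = relu_maps.sum(axis=0)                    # step 2: sum over channels
    n = s / max(s.max(), 1e-12)                  # step 3: normalize to [0, 1]
    return np.where(n >= t, n, 0.0)              # step 4: zero out values below t
```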
Our revised approach yields more coherent and focused visual explanations, as validated by quantitative assessments on the ClickMe dataset.

Appendix C: DCLS vs. Baseline: Interpretability Analysis with Grad-CAM and Threshold-Grad-CAM
---------------------------------------------------------------------------------------------

Figure 3: Comparative analysis of interpretability scores across different models using Grad-CAM and Threshold-Grad-CAM. Top: interpretability scores with Grad-CAM. Bottom: interpretability scores with Threshold-Grad-CAM. Both subfigures highlight the difference in scores with and without DCLS. The results indicate that DCLS generally improves interpretability scores for most models.

Appendix D: Model Size vs. Interpretability Score Using Threshold-Grad-CAM
--------------------------------------------------------------------------

Figure 4: Correlation between model size and interpretability for baseline models, using Threshold-Grad-CAM scores. Larger models tend to have higher interpretability scores, suggesting a positive correlation between model size and explainability in baseline models.
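The trend in Figure 4 can be quantified with Spearman's rank correlation (Zar [2005], cited in the references): the Pearson correlation of the rank-transformed model sizes and interpretability scores. The following is a minimal NumPy sketch without tie correction, applied to made-up illustrative numbers rather than the paper's measurements:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    No tie correction, so distinct values in x and y are assumed."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical model sizes (in millions of parameters) and interpretability scores:
sizes = [25.6, 44.5, 88.6, 197.0]
scores = [0.41, 0.45, 0.52, 0.55]
rho = spearman_rho(sizes, scores)  # 1.0 here, since the relation is perfectly monotonic
```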

Appendix E: Visualizing Grad-CAM and Threshold-Grad-CAM Heatmaps
----------------------------------------------------------------

Figure 5: ResNet50 Grad-CAM and Threshold-Grad-CAM heatmaps across 10 randomly chosen license-free internet images. Top row: original images. Middle row: images with Grad-CAM heatmaps. Bottom row: images with Threshold-Grad-CAM heatmaps.

Figure 6: ConvFormer Grad-CAM and Threshold-Grad-CAM heatmaps across 10 randomly chosen license-free internet images. Top row: original images. Middle row: images with Grad-CAM heatmaps. Bottom row: images with Threshold-Grad-CAM heatmaps.