
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An
Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on
Learning Representations, 2021. 1,2,4
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing Higher-Layer Features of a Deep
Network. Technical Report, Université de Montréal, 2009. 1,3
Fenglei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang. On Interpretability of Artificial Neural Networks: A Survey,
2021. arXiv:2001.02522. 1
Paolo Fantozzi and Maurizio Naldi. The Explainability of Transformers: Current Status and Directions. Computers,
13(4):92, 2024. 4
Zahra Fayyaz, Aya Altamimi, Carina Zoellner, Nicole Klein, Oliver T. Wolf, Sen Cheng, and Laurenz Wiskott. A
Model of Semantic Completion in Generative Episodic Memory. Neural Computation, 34(9):1841–1870, 2022. 18
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and
C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision
(ECCV), LNCS vol. 8693, 2014. 6,17
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss Surfaces, Mode
Connectivity, and Fast Ensembling of DNNs. In Advances in Neural Information Processing Systems. Curran
Associates, Inc., 2018. 3
Amin Ghiasi, Hamid Kazemi, Steven Reich, Eitan Borgnia, Manli Shu, Micah Goldblum, Andrew Gordon Wilson,
and Tom Goldstein. What do Vision Transformers Learn? A Visual Exploration, 2022. arXiv:2212.06727 [cs]. 4
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples, 2015.
arXiv:1412.6572 [stat]. 3
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, Las Vegas, NV, USA,
2016. IEEE. 6
Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver:
General Perception with Iterative Attention, 2021. arXiv:2103.03206 [cs]. 17
David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, and Anand Subramoney. Block-
local learning with probabilistic latent representations, 2023. arXiv:2305.14974 [cs]. 18
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On
Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2017. arXiv:1609.04836 [cs]. 3
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer
Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment Anything, 2023.
arXiv:2304.02643. 17
Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of Neural Network Represen-
tations Revisited. In Proceedings of the 36th International Conference on Machine Learning, pages 3519–3529.
PMLR, 2019. 3
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural
Networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2012. 1,3,8
Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, and Heung-Yeung Shum. Mask DINO: Towards
A Unified Transformer-based Framework for Object Detection and Segmentation, 2022a. arXiv:2206.02777 [cs]. 2
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the Loss Landscape of Neural
Nets. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2018. 3
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, and Dejing Dou. Interpretable
Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond, 2022b. arXiv:2103.10689. 1
Yong Li, Naipeng Miao, Liangdi Ma, Feng Shuang, and Xingwen Huang. Transformer for object detection: Review
and benchmark. Engineering Applications of Artificial Intelligence, 126:107021, 2023a. 1