mishig HF Staff commited on
Commit
f972c4a
verified ·
1 Parent(s): 7753d4b

Add 1 files

Browse files
Files changed (1)
  1. 2401/2401.08860.md +3435 -0
2401/2401.08860.md ADDED
@@ -0,0 +1,3435 @@
Title: Cross-level Multi-instance Distillation for Self-supervised Fine-grained Visual Categorization

URL Source: https://arxiv.org/html/2401.08860

Markdown Content:
License: arXiv.org perpetual non-exclusive license
arXiv:2401.08860v3 [cs.CV] 24 Jun 2025

Cross-level Multi-instance Distillation for Self-supervised Fine-grained Visual Categorization

Qi Bi, Wei Ji∗, Jingjun Yi, Haolan Zhan, Gui-Song Xia∗

Qi Bi is now with the University of Amsterdam, 1098XH, the Netherlands. He was with Wuhan University, Wuhan, 430072, China. Jingjun Yi is with Wuhan University, Wuhan, 430072, China. Wei Ji is with Yale University, New Haven, the United States. Haolan Zhan is with Monash University, Australia. G.-S. Xia is with the School of Artificial Intelligence, National Engineering Research Center for Multimedia Software, School of Computer Science, and also the State Key Lab. LIESMARS, Wuhan University, Wuhan, 430072, China. ∗Correspondence: wei.ji@yale.edu and guisong.xia@whu.edu.cn.

Abstract
High-quality annotation of fine-grained visual categories demands great expert knowledge, which is taxing and time-consuming. Alternatively, learning fine-grained visual representations from enormous unlabeled images (e.g., species, brands) by self-supervised learning becomes a feasible solution. However, recent investigations find that existing self-supervised learning methods are less qualified to represent fine-grained categories. The bottleneck lies in that the pre-trained class-agnostic representation is built from every patch-wise embedding, while fine-grained categories are only determined by several key patches of an image. In this paper, we propose a Cross-level Multi-instance Distillation (CMD) framework to tackle this challenge. Our key idea is to consider the importance of each image patch in determining the fine-grained representation by multiple instance learning. To comprehensively learn the relation between informative patches and fine-grained semantics, multi-instance knowledge distillation is implemented on both the region/image crop pairs between the teacher and student net and the region-image crops inside the teacher/student net, which we term intra-level multi-instance distillation and inter-level multi-instance distillation, respectively. Extensive experiments on several commonly used datasets, including CUB-200-2011, Stanford Cars and FGVC Aircraft, demonstrate that the proposed method outperforms the contemporary methods by up to 10.14% and existing state-of-the-art self-supervised learning approaches by up to 19.78% on both the top-1 accuracy and Rank-1 retrieval metric. Source code is available at https://github.com/BiQiWHU/CMD.

Index Terms: Fine-grained Visual Categorization, Self-supervised Learning, Multiple Instance Learning, Knowledge Distillation.
I. Introduction
I-A. Problem Statement

Fine-grained visual categorization (FGVC) aims to discern similar objects, brands and species from the same coarse-grained category into their individual fine-grained categories [1, 2, 3, 4, 5]. A key bottleneck of FGVC is the scarcity of large-scale, high-quality fine-grained annotation, which costs extensive time and effort from experts with proficient domain knowledge [6]. While sample annotation is large-scale in many vision tasks [7, 8], most existing FGVC datasets still contain fewer than ten thousand samples. As there is a huge amount of unlabeled images of similar species, brands and objects, self-supervised learning (SSL) techniques, which learn a class-agnostic representation from unlabeled images [9, 10, 11, 12], provide a feasible substitute to alleviate this dilemma.
Figure 1: The key challenge of incorporating self-supervised learning (SSL) into fine-grained visual categorization (FGVC) lies in the learning objective inconsistency. Left: SSL leverages image-level objectives to learn the same coarse-grained feature space; right: FGVC leverages region-level objectives to learn subtle parts (in red boxes) that discern an image from other fine-grained categories under the same coarse-grained category.

Recent works have explored this possibility, but unfortunately found that existing SSL pipelines are less qualified to highlight the key part patterns (e.g., informative patches) for fine-grained semantics [13, 14]. Specifically, an indiscriminative FGVC feature space from contrastive learning is observed [13], and a number of dominant SSL pipelines show significantly inferior performance compared with the fully-supervised upper bound [14]. Overall, adapting SSL for FGVC remains intriguing but under-explored.
The key challenge of self-supervised fine-grained visual representation learning lies in the learning objective inconsistency between FGVC and SSL. Existing SSL pipelines leverage image-level learning objectives (illustrated in Fig. 1 left), which minimize the distance between two augmented views by taking the feature embedding of the entire image into account [13]. In contrast, FGVC relies on region-level learning objectives (illustrated in Fig. 1 right). As the fine-grained categories within a certain coarse-grained category only have subtle differences, the fine-grained semantics are usually determined by only a few key local regions. Thus, the information from the key part patterns can be overwhelmed by the huge amount of indiscriminative patterns in existing SSL pipelines.

Figure 2: Flowchart of the existing DINO paradigm (left) and the proposed Cross-level Multi-instance Distillation (CMD) (right). MIM refers to multi-instance modeling.
I-B. Motivation & Objectives

In general, the pioneering self-supervised fine-grained visual categorization methods either directly apply the image-level contrastive learning objective [13] or incorporate the region-level activation patterns only when doing inference on the acquired contrastive representation [14]. Unfortunately, the aforementioned learning objective inconsistency is not essentially addressed. In this paper, we are motivated to push the frontier of self-supervised fine-grained visual representation learning by addressing the learning objective inconsistency between SSL and FGVC. In contrast to existing methods [13, 14], our learning objective is devised so that the class-agnostic representation not only focuses on the discriminative regions during learning, but also explores the relation between the region patterns and the image-level context. The specific objectives are two-fold.
I-B1. The key local regions that determine the fine-grained categories contribute more to the pre-trained class-agnostic representation

This objective is intuitive and straightforward to understand. Only when the subtle fine-grained patterns, which rest in several local regions of an image, are highlighted in the SSL pipeline can the pre-trained class-agnostic representation carry more discriminative information for fine-grained patterns.

To realize this objective, multiple instance learning (MIL) is introduced [15, 16, 17] to model the relation between an image and each of its patches, where each patch is regarded as an instance and each image is regarded as a bag [18, 19, 20]. In contrast to conventional MIL with available bag-level annotation [16, 17], the bag annotation is unavailable in our self-supervised fine-grained learning problem. To this end, the bag representations of the augmented crop pairs are learned under the self-supervised knowledge distillation with no labels (DINO) paradigm [12], so as to guide the instance-level representation learning.
I-B2. Aligning the fine-grained information between the region level and the image level

Modern SSL pipelines (e.g., EsViT [21]) use both region- and image-level crops to learn the pre-trained class-agnostic representation, which provides more opportunities to extensively exploit the informative patches corresponding to the fine-grained semantics. Ideally, the cropped image-region pairs that represent similar fine-grained categories need to be close together, while those representing dissimilar fine-grained categories need to be far apart.

To realize this objective, we propose both intra-level and inter-level multi-instance distillation (Fig. 2). The intra-level multi-instance distillation acquires the fine-grained knowledge from image-level crops and region-level crops across the teacher-student pair. In contrast, the inter-level multi-instance distillation acquires the fine-grained knowledge from image-region crop pairs inside the teacher/student net. The joint distillation of both helps comprehensively learn the relation between the informative parts and the fine-grained semantics of an image.
I-C. Contribution

Compared with existing FGVC methods that leverage intra-/inter-level relations, the proposed method makes the following advancements. Firstly, while existing methods rely on fine-grained semantic annotation to constrain the intra-/inter-level relation, the proposed method innovatively constrains the intra-/inter-level representations by the same fine-grained semantics without any label supervision. Secondly, existing methods do not consider the aforementioned learning objective inconsistency problem in self-supervised FGVC. In contrast, the proposed method resolves it by incorporating the regions of fine-grained patterns into the self-supervised representation, and constrains both regions and images under the same fine-grained semantics. Thirdly, the proposed method leverages the multiple instance learning (MIL) formulation to theoretically realize this constraint. MIL has rarely been used for self-supervised learning and self-supervised fine-grained representation, and the proposed method bridges this research gap.

From a technical perspective, the existing knowledge distillation with no labels (DINO) paradigm only learns a global representation from image-level crops. In contrast, the proposed method advances DINO with both image-level and region-level crops, so as to perceive the subtle fine-grained patterns from the region while at the same time being constrained by the context information from the image. Besides, it makes the first exploration of simultaneously modeling the relation for both region- and image-level bags under the MIL formulation. Further, the proposed intra- and inter-level multi-instance distillation, under the strict MIL formulation, allows both the instance embeddings and the bag embeddings to be constrained by the fine-grained semantics, which is demonstrated by a theoretical analysis and a significant improvement over the baseline.

Though self-supervised FGVC still performs below the state-of-the-art fully-supervised FGVC methods, its pre-training does not involve fine-grained annotation. This property is particularly important for FGVC, as fine-grained annotation demands expertise in fields such as biology and is extremely difficult to collect. Therefore, self-supervised FGVC methods show great potential to alleviate annotation labor, harness the huge amount of unlabeled data and enhance robustness to unseen fine-grained data. In the future, assuming that millions of unlabeled fine-grained images of species can be collected, self-supervised pre-training becomes an effective way to learn a discriminative fine-grained image representation. Thus, self-supervised FGVC methods hold importance in such scenarios and merit more future exploration.

Concretely, our contribution can be summarized as follows:
- We propose a Cross-level Multi-instance Distillation (CMD) framework for self-supervised fine-grained visual categorization. Multiple instance learning is introduced to model the relation between fine-grained semantics and image patches.

- We propose both intra-level and inter-level learning objectives for multi-instance distillation, which learn the relation between informative parts and fine-grained semantics among the image-level and region-level crops from both the teacher and student net.

- Experiments on CUB-200-2011, Stanford Cars and FGVC Aircraft show that the proposed CMD outperforms the contemporary method by up to 10.14% and the state-of-the-art self-supervised representation learning methods by up to 19.78% on top-1 accuracy.
The remainder of this paper is organized as follows. In Section II, the related work is summarized. In Section III, a detailed problem formulation of MIL and self-supervised FGVC is discussed. In Section IV, the proposed CMD is presented in detail. In Section V, extensive experiments are implemented to validate the effectiveness of the proposed CMD. Finally, we draw the conclusion in Section VI.
II. Related work
II-A. Fine-grained Visual Categorization

Fine-grained visual categorization (FGVC) is a fundamental task for image understanding and pattern recognition. It aims to discern a specific fine-grained category (e.g., golden retriever) within a subordinate coarse-grained category (e.g., dog). The discriminative patterns among different fine-grained categories can be rather subtle and are usually termed parts. They may only occupy several or tens of pixels in an entire image.

Existing FGVC approaches are usually fully-supervised. They follow the part-driven paradigm, which discerns the fine-grained semantics from the discriminative parts. The utilization of parts can be either explicit or implicit. The explicit part selection pipelines are heavily inherited from visual localization frameworks [3, 4, 1]. More recently, implicit part selection approaches have been extensively studied. Technically, these methods either build multi-scale representations [22, 23, 24, 25, 26] from CNN [27, 22, 28] and ViT models [26, 29, 30], or utilize attention modules [24, 31, 32, 33].

On the other hand, there is a recent trend to combine FGVC with other tasks, such as object re-identification [34, 35], image retrieval [36, 37], and multi-modal learning [38, 2]. In general, enhancing the representation of subtle fine-grained patterns also improves the discriminative ability on these tasks, yielding better performance. However, how to distill the key local patterns under a multiple instance learning (MIL) formulation has not been explored by prior works, where the MIL formulation can pose a rigorous constraint between the subtle patterns and the fine-grained semantic categories. Note that although several recent works leverage self-supervised techniques to refine the fine-grained patterns [39, 40, 41], these methods still require fine-grained annotations to supervise the model and thus remain under the fully-supervised paradigm.
II-B. Self-supervised Visual Representation Learning

In the context of computer vision, self-supervised learning (SSL) intends to learn a pre-text visual representation from unlabeled images for down-stream tasks [42]. Representative works include SimCLR [9], MoCo [11], BYOL [43], SimSiam [10], BarlowTwins [44] and VICReg [45]. More recently, multiple advanced SSL methods have been proposed [46, 47, 48]. The overall idea of these methods is to learn an image representation from two or multiple augmented views.

On the other hand, self-distillation with no labels (DINO) shows its superiority [12] over these image-level SSL methods. Vision Transformer based DINO shows emerging properties to perceive more details in the image beyond the image-level semantics [12], which yields a stronger feature representation. Later, its advanced version, the efficient self-supervised vision transformer (EsViT), was proposed to improve the computational efficiency and the view interaction [21]. Compared with CNN based contrastive learning frameworks, ViT based DINO frameworks [12, 21] are more capable of preserving the object structures in an image. This motivates us to shift the focus from CNN backbones to ViT backbones for fine-grained self-supervised learning. To the best of our knowledge, learning fine-grained visual representations via self-supervised knowledge distillation remains unexplored.
II-C. Self-supervised Fine-grained Visual Categorization

Self-supervised FGVC is an emerging research topic, which intends to learn a pre-trained class-agnostic representation that can discriminate fine-grained semantics. Pioneering works are summarized as follows. Cole et al. empirically found that some existing self-supervised learning pipelines suffer a significant performance decline on the FGVC task [13]. Shu et al. proposed a common rationale learning strategy (LCR) for fine-grained visual categorization and retrieval [14]. Kim et al. considered fine-grained visual representation learning under a self-supervised open-set scenario [49]. Hu et al. proposed an asymmetric augmented self-supervised learning scheme, where the pre-trained representation can be used to query fine-grained images [50].

On the other hand, as earlier FGVC pipelines adapt localization pipelines for part proposals, in this paper we consider self-supervised localization approaches [51, 52, 53, 54, 55] as an alternative solution for self-supervised FGVC and make an extensive comparison. Compared with these methods, the proposed CMD does not rely on part proposals as a prerequisite to learn the fine-grained representation. Instead, it highlights the key instances implicitly in a one-stage self-supervised representation learning pipeline.
II-D. Multiple Instance Learning

Multiple instance learning (MIL) is a typical machine learning tool for the so-called weak annotation scenario. It models an image as a bag of instances with only a single bag annotation, and is able to find the key instances that trigger the bag annotation.

In the deep learning era, MIL has been integrated into deep learning models in an end-to-end manner, which is termed deep MIL. Specifically, residual connections and deep supervision have turned out to be effective in improving deep MIL features [17, 56]. Attention weights can also enhance the representation capability of deep MIL features [16, 57, 58].

More recently, due to the rapid development of deep learning and computer vision, deep MIL has also demonstrated its representation learning ability on other tasks such as whole slide image classification [59, 60], aerial scene classification [61, 62], weakly-supervised object detection [63, 20] and weakly-supervised segmentation [64].
III. Preliminary
III-A. MIL Formulation

Given a bag $\mathbf{X}$ that consists of a set of instances $\mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_K\}$, the bag $\mathbf{X}$ corresponds to a binary label $Y \in \{0, 1\}$, while there is no label for each individual instance.

Multiple instance learning (MIL) assumes that each instance $\mathbf{x}_k$ ($k = 1, \ldots, K$) has a label $y_k \in \{0, 1\}$, but these instance-level labels remain unknown during the learning process. This is formulated as

$$Y = \begin{cases} 0 & \text{if } \sum_{k=1}^{K} y_k = 0, \\ 1 & \text{otherwise.} \end{cases} \tag{1}$$

Avoiding the gradient vanishing problem is the key challenge when incorporating MIL into deep neural networks. To this end, the bag label is assumed to follow a Bernoulli distribution [16, 65], giving $Y \in [0, 1]$.
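As a concrete illustration, the hard MIL rule of Eq. 1 and a smooth relaxation with $Y \in [0, 1]$ can be sketched in a few lines of Python. The noisy-OR aggregation below is one common differentiable choice, used here purely for illustration; it is not necessarily the exact relaxation adopted in [16, 65]:

```python
def bag_label(instance_labels):
    """Hard MIL rule (Eq. 1): the bag label Y is 0 only when every
    (latent) instance label y_k equals 0, and 1 otherwise."""
    return 0 if sum(instance_labels) == 0 else 1

def soft_bag_label(instance_probs):
    """Smooth relaxation with Y in [0, 1]: a noisy-OR aggregation,
    P(Y=1) = 1 - prod_k (1 - y_k), which avoids the vanishing
    gradients of the hard rule above."""
    prod = 1.0
    for p in instance_probs:
        prod *= 1.0 - p
    return 1.0 - prod
```

For example, `soft_bag_label([0.5, 0.5])` returns 0.75, smoothly interpolating between the two hard outcomes.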
III-B. MIL Aggregation Function

Assume that each instance $\mathbf{x}_k$ corresponds to an instance embedding $\mathbf{z}_{\mathbf{x}_k}$, and the bag $\mathbf{X}$ corresponds to a bag embedding $\mathbf{z}$. The aggregation function $g(\cdot)$ bridges the gap between the instance embeddings and the bag embedding by aggregating $\mathbf{z}_{\mathbf{x}_k}$ into $\mathbf{z}$. The aggregation function $g(\cdot)$ is assumed to be permutation invariant.

Lemma 1. A scoring function for a set of instances $\mathbf{X}$, $S(\mathbf{X}) \in \mathbb{R}$, is permutation invariant to the elements in $\mathbf{X}$ if and only if it can be decomposed in the following form:

$$S(\mathbf{X}) = g\Big( \sum_{\mathbf{x} \in \mathbf{X}} f(\mathbf{x}) \Big), \tag{2}$$

where $f$ and $g$ are suitable transformations.

Proof. Please refer to [65] for the detailed proof.
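Lemma 1's decomposition can be illustrated with a minimal sketch; the toy transformations `f` and `g` below are arbitrary demonstration choices, not the ones used in the paper:

```python
def score(instances, f, g):
    """Lemma 1 decomposition: S(X) = g(sum_{x in X} f(x)).
    The inner sum discards instance order, so S is permutation
    invariant by construction."""
    pooled = [0.0] * len(f(instances[0]))
    for x in instances:
        for j, v in enumerate(f(x)):
            pooled[j] += v
    return g(pooled)

# Toy check: f embeds each scalar instance, g reduces the pooled vector.
f = lambda x: [x, x * x]
g = lambda z: z[0] + 0.5 * z[1]
assert score([1.0, 2.0, 3.0], f, g) == score([3.0, 1.0, 2.0], f, g)
```

Shuffling the instances leaves the score unchanged, which is exactly the permutation invariance required of the MIL aggregation function.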
III-C. MIL Formulation for Self-supervised FGVC

In the self-supervised fine-grained visual representation learning task, there is neither an explicit instance label $y_k$ nor a bag label $Y$. Following the MIL paradigm (Eq. 2), the bag embedding is a pre-trained class-agnostic representation learned by a self-supervised framework.

Specifically, given two augmented views, following Eq. 2, the bag embedding of each view, denoted as $\mathbf{z}_t$ and $\mathbf{z}_s$, can be generated. Then, the MIL formulation under self-supervised learning can be presented as

$$\mathcal{L}(\mathbf{z}_t, \mathbf{z}_s), \tag{3}$$

where $\mathcal{L}$ is a learning objective function that minimizes the difference between $\mathbf{z}_t$ and $\mathbf{z}_s$. The learning objective function $\mathcal{L}$ differs across self-supervised learning pipelines. For example, for SimCLR [9] it is the contrastive loss function, and for self-distillation with no labels (DINO) [12] it is a summed cross-entropy loss function.
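A minimal sketch of such a DINO-style cross-entropy objective, assuming softmax-normalized teacher and student outputs with a lower (sharper) teacher temperature; the centering and stop-gradient mechanics of the full DINO recipe are omitted for brevity:

```python
import math

def softmax(logits, temp):
    """Temperature-scaled softmax over a list of logits."""
    m = max(l / temp for l in logits)
    exps = [math.exp(l / temp - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(z_t, z_s, temp_t=0.04, temp_s=0.1):
    """L(z_t, z_s) = -sum_d P_t(d) log P_s(d): the student
    distribution is pulled toward the sharper teacher one."""
    p_t = softmax(z_t, temp_t)
    p_s = softmax(z_s, temp_s)
    return -sum(pt * math.log(ps + 1e-12) for pt, ps in zip(p_t, p_s))

# Aligned views incur a much smaller loss than misaligned ones.
assert distill_loss([2.0, 0.0, 0.0], [2.0, 0.0, 0.0]) < \
       distill_loss([2.0, 0.0, 0.0], [0.0, 0.0, 2.0])
```

The temperature values here are illustrative defaults; in practice they are tuned, and the teacher weights are updated as an exponential moving average of the student.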
IV. Proposed Approach

Figure 3: Framework overview of the proposed Cross-level Multi-instance Distillation (CMD) for self-supervised fine-grained visual categorization. The proposed framework follows the self-distillation with no labels (DINO) [12] paradigm. After feature extraction, three key steps are involved, namely, multi-instance modeling (Sec. IV-A), intra-level multi-instance distillation (Sec. IV-B with $\mathcal{L}_I$ and $\mathcal{L}_R$) and inter-level multi-instance distillation (Sec. IV-C with $\mathcal{L}_T$ and $\mathcal{L}_S$).
344
Fig. 3 gives an overview of the proposed Cross-level Multi-instance Distillation (CMD) for self-supervised FGVC. On top of the self-distillation with no labels (DINO) [12] paradigm, three key steps are involved in CMD, namely, multi-instance modeling (Sec. IV-A), intra-level multi-instance distillation (Sec. IV-B) and inter-level multi-instance distillation (Sec. IV-C). Finally, implementation details and loss functions are provided in Sec. IV-D.

The proposed CMD is distinct from the common patch- and image-level losses in multiple aspects. Firstly, our integration of patch- and image-level losses follows the rigorous multiple instance learning (MIL) formulation, which provides an implicit constraint of the fine-grained semantics on both region- and image-level crops. Secondly, directly applying patch- and image-level losses cannot resolve the learning-objective inconsistency problem in self-supervised FGVC. CMD resolves it by intra- and inter-level distillation over both patch- and image-level losses, which allows self-supervised learning to be optimized by the patch-level objective while being constrained by the image-level context. Thirdly, CMD makes a new exploration to leverage both patch- and image-level information for the DINO paradigm, and to advance MIL for both patch- and image-level information under self-supervised learning.

IV-A Multi-instance Modeling

IV-A1 Motivation

Self-supervised FGVC needs to bridge the relation between several highlighted key patches of the image and the fine-grained semantics. To this end, MIL is introduced to model each patch of the image as an instance and the overall image as a bag. Although MIL has demonstrated great success in weakly-supervised learning scenarios, some unique challenges remain in our task.

IV-A2 Challenges

Usually, bag-level annotation is available under the MIL formulation [18, 19, 20, 66]. However, for self-supervised FGVC, there is neither bag-level nor instance-level annotation. Thus, the bag representation has to be learned in a self-supervised manner. In addition, to comprehensively exploit the relation between key patches and fine-grained semantics, it is necessary to inherit both region-level and image-level crops from existing SSL pipelines [21]. For simplicity, we model both region-level and image-level crops as bags.
IV-A3 Notations & Definitions

Given an image $\mathbf{X}$ as input, two augmented views $\mathbf{X}^t$ and $\mathbf{X}^s$ are generated for the teacher net $t$ and the student net $s$, respectively. For both nets, some crops are image-level (denoted as $\mathcal{I}$) and some crops are region-level (denoted as $\mathcal{R}$). For convenience, each instance $\mathbf{x}^s_i$ or $\mathbf{x}^t_i$ from an augmented view $\mathbf{X}^s$ or $\mathbf{X}^t$ is formulated as an image patch of the same size as the patch size of a Vision Transformer backbone, denoted as $\mathbf{X}^s=\{\mathbf{x}^s_1,\dots,\mathbf{x}^s_i,\dots\}$ and $\mathbf{X}^t=\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_i,\dots\}$. For both the teacher and student net, the Swin Transformer [67] is used as the default backbone for feature extraction. A linear layer $h(\cdot)$ is used to transform the patch-wise embeddings $\mathbf{R}^t$ and $\mathbf{R}^s$ into sets of instance embeddings $\mathbf{z}_{\mathbf{x}^t_i}\in\mathbb{R}^{(w\cdot h)\times 1}$ and $\mathbf{z}_{\mathbf{x}^s_i}\in\mathbb{R}^{(w\cdot h)\times 1}$.
IV-A4 Multiple Instance Aggregation Function

The key to modeling the relation between instances and a bag is the aggregation function, which has to be invariant to permutations of the instance order [16, 17]. Taking the teacher net as an example, a mean-pooling-based aggregation function is used to learn both the region-level bag embedding $\mathbf{z}^t\in\mathbb{R}^{(w\cdot h)\times 1}$ and the image-level bag embedding $\bar{\mathbf{z}}^t\in\mathbb{R}^{(w\cdot h)\times 1}$, given by

$$\mathbf{z}^t=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{x}^t_i\in\mathcal{R}}\mathbf{z}_{\mathbf{x}^t_i},\qquad \bar{\mathbf{z}}^t=\frac{1}{|\mathcal{I}|}\sum_{\mathbf{x}^t_i\in\mathcal{I}}\mathbf{z}_{\mathbf{x}^t_i}. \tag{4}$$

For the student network, the aggregation from instance representations to bag representations follows the same process, yielding the region-level bag representation $\mathbf{z}^s\in\mathbb{R}^{(w\cdot h)\times 1}$ and the image-level bag representation $\bar{\mathbf{z}}^s\in\mathbb{R}^{(w\cdot h)\times 1}$.
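As a minimal, library-agnostic sketch (the 9-instance and 16-dimension shapes below are illustrative assumptions, not the paper's settings), the mean-pooling aggregation of Eq. 4 can be written in a few lines, and its permutation invariance checked numerically:

```python
import numpy as np

def bag_embedding(instance_embeddings: np.ndarray) -> np.ndarray:
    # Mean-pooling aggregation: (num_instances, dim) -> (dim,)
    return instance_embeddings.mean(axis=0)

rng = np.random.default_rng(0)
instances = rng.normal(size=(9, 16))   # e.g., 9 instances of a region crop
bag = bag_embedding(instances)

# Shuffling the instance order leaves the bag embedding unchanged.
perm = rng.permutation(9)
bag_shuffled = bag_embedding(instances[perm])
assert np.allclose(bag, bag_shuffled)
```

Because summation is commutative, any reordering of the instances produces the same mean, which is exactly the property proved below.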
IV-A5 Permutation Invariance Property

Here we prove that, in the proposed framework, the aggregation function from both region-level and image-level crops to the bag embedding satisfies the permutation invariance property, which is both sufficient and necessary to hold the multiple instance formulation.

Theorem 1. The transformation from the instance embeddings to the bag embedding of a region crop (Eq. 5) is permutation-invariant.

Proof. Take the teacher net $t$ as an example. In the proposed framework, the transformation between the instance embeddings $\mathbf{z}_{\mathbf{x}^t}$ and the bag embedding $\mathbf{z}^t$ of a region crop $\mathcal{R}$ is calculated as

$$\mathbf{z}^t=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{x}^t_k\in\mathcal{R}}\mathbf{z}_{\mathbf{x}^t_k}, \tag{5}$$

where $\mathcal{R}=\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_R\}$, and the instances $\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_R\}$ correspond to the instance embeddings $\{\mathbf{z}_{\mathbf{x}^t_1},\dots,\mathbf{z}_{\mathbf{x}^t_k},\dots,\mathbf{z}_{\mathbf{x}^t_R}\}$.

Assume $\{\mathbf{x}^t_{\sigma(1)},\dots,\mathbf{x}^t_{\sigma(k)},\dots,\mathbf{x}^t_{\sigma(R)}\}$ is a rearrangement of $\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_R\}$, which corresponds to the instance embeddings $\{\mathbf{z}_{\mathbf{x}^t_{\sigma(1)}},\dots,\mathbf{z}_{\mathbf{x}^t_{\sigma(k)}},\dots,\mathbf{z}_{\mathbf{x}^t_{\sigma(R)}}\}$.

A permutation transformation $t_\sigma$ of the instances is a process presented as

$$t_\sigma:\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_R\}\rightarrow\{\mathbf{x}^t_{\sigma(1)},\dots,\mathbf{x}^t_{\sigma(k)},\dots,\mathbf{x}^t_{\sigma(R)}\}. \tag{6}$$

Obviously, we have

$$\mathbf{z}^t=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{x}^t_k\in\mathcal{R}}\mathbf{z}_{\mathbf{x}^t_k}=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{x}^t_{\sigma(k)}\in\mathcal{R}}\mathbf{z}_{\mathbf{x}^t_{\sigma(k)}}. \tag{7}$$

Theorem 2. The transformation from the instance embeddings to the bag embedding of an image crop (Eq. 8) is permutation-invariant.

Proof. The proof is similar to that of Theorem 1. Still take the teacher net $t$ as an example. In the proposed framework, the transformation between the instance embeddings $\mathbf{z}_{\mathbf{x}^t}$ and the bag embedding $\bar{\mathbf{z}}^t$ of an image crop $\mathcal{I}$ is calculated as

$$\bar{\mathbf{z}}^t=\frac{1}{|\mathcal{I}|}\sum_{\mathbf{x}^t_k\in\mathcal{I}}\mathbf{z}_{\mathbf{x}^t_k}, \tag{8}$$

where $\mathcal{I}=\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_I\}$, and the instances $\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_I\}$ correspond to the instance embeddings $\{\mathbf{z}_{\mathbf{x}^t_1},\dots,\mathbf{z}_{\mathbf{x}^t_k},\dots,\mathbf{z}_{\mathbf{x}^t_I}\}$.

Assume $\{\mathbf{x}^t_{\sigma(1)},\dots,\mathbf{x}^t_{\sigma(k)},\dots,\mathbf{x}^t_{\sigma(I)}\}$ is a rearrangement of $\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_I\}$, which corresponds to the instance embeddings $\{\mathbf{z}_{\mathbf{x}^t_{\sigma(1)}},\dots,\mathbf{z}_{\mathbf{x}^t_{\sigma(k)}},\dots,\mathbf{z}_{\mathbf{x}^t_{\sigma(I)}}\}$.

A permutation transformation $t_\sigma$ of the instances is a process presented as

$$t_\sigma:\{\mathbf{x}^t_1,\dots,\mathbf{x}^t_k,\dots,\mathbf{x}^t_I\}\rightarrow\{\mathbf{x}^t_{\sigma(1)},\dots,\mathbf{x}^t_{\sigma(k)},\dots,\mathbf{x}^t_{\sigma(I)}\}. \tag{9}$$

Obviously, we have

$$\bar{\mathbf{z}}^t=\frac{1}{|\mathcal{I}|}\sum_{\mathbf{x}^t_k\in\mathcal{I}}\mathbf{z}_{\mathbf{x}^t_k}=\frac{1}{|\mathcal{I}|}\sum_{\mathbf{x}^t_{\sigma(k)}\in\mathcal{I}}\mathbf{z}_{\mathbf{x}^t_{\sigma(k)}}. \tag{10}$$
IV-B Intra-level Multi-instance Distillation

IV-B1 Motivation

In the feature space, both the distance between the region-level bag representations from the teacher net $\mathbf{z}^t$ and the student net $\mathbf{z}^s$, and the distance between the image-level bag representations from the teacher net $\bar{\mathbf{z}}^t$ and the student net $\bar{\mathbf{z}}^s$, need to be minimized. In this way, the cropped image pairs and cropped region pairs that represent similar fine-grained categories are pulled close together, while those that represent dissimilar fine-grained categories are pushed far apart.

The intra-level multi-instance distillation is proposed to realize these two objectives; it distills image-level crops (Eq. 11) and region-level crops (Eq. 12), respectively.

IV-B2 Notations & Definitions

For both the teacher net $t$ and the student net $s$, after view augmentation, there are multiple view crops (both region crops and image crops) from an image, denoted as $\mathbf{V}=\{\mathbf{X}^t\}$ and $\mathbf{V}'=\{\mathbf{X}^s\}$, respectively. Assume that the crop pairs from the teacher and student net are organized as $\mathcal{P}=\{(s,t)\,|\,\mathbf{X}^s\in\mathbf{V}',\,\mathbf{X}^t\in\mathbf{V},\,s\neq t\}$.
IV-B3 Image-level Multi-instance Distillation

After MIL modeling, the distance between the image-level bag representations of the teacher net $\bar{\mathbf{z}}^t$ and the student net $\bar{\mathbf{z}}^s$ is minimized pair by pair, given by

$$\mathcal{L}_I=-\frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}}\bar{\mathbf{z}}^s\log\bar{\mathbf{z}}^t. \tag{11}$$
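A minimal NumPy sketch of this image-level term, assuming (as in the $H_I$ routine of Algorithm 1) that bag embeddings are turned into probabilities by temperature-scaled softmax before the cross-entropy is taken; the temperatures and shapes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def image_level_loss(z_bar_t: np.ndarray, z_bar_s: np.ndarray,
                     tmp_t: float = 0.04, tmp_s: float = 0.1) -> float:
    # The sharpened teacher distribution guides the student distribution,
    # averaged over all paired image-level bag embeddings.
    p_t = softmax(z_bar_t / tmp_t)   # teacher side (no gradient in practice)
    p_s = softmax(z_bar_s / tmp_s)   # student side
    return float(-(p_t * np.log(p_s)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
z_bar_t = rng.normal(size=(4, 32))   # 4 teacher image-level bag embeddings
z_bar_s = rng.normal(size=(4, 32))   # 4 paired student bag embeddings
loss = image_level_loss(z_bar_t, z_bar_s)
assert loss > 0.0                    # cross-entropy of distributions is positive
```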
IV-B4 Region-level Multi-instance Distillation

Similarly, after MIL modeling, the distance between the region-level bag representations of the teacher net $\mathbf{z}^t$ and the student net $\mathbf{z}^s$ is minimized, first region by region within each pair $(s,t)$ and then averaged pair by pair, given by

$$\mathcal{L}_R=-\frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}}\frac{1}{T}\sum_{i=1}^{T}\mathbf{z}_{j^*}\log\mathbf{z}_i, \tag{12}$$

where $j^*$ is the index of the feature in $\mathbf{R}^t$ that has the highest similarity with the $i$-th feature in $\mathbf{R}^s$ (defined in Sec. IV-A), and $T$ is the length of the patch-wise embedding:

$$j^*=\operatorname*{arg\,max}_{j}\frac{R_i^{\top}R_j}{|R_i|\cdot|R_j|}, \tag{13}$$

where $\mathbf{z}_i=h(R_i)$ and $\mathbf{z}_{j^*}=h(R_{j^*})$, with $R_i\in\mathbf{R}^s$ and $R_j\in\mathbf{R}^t$.
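The best-match index $j^*$ of Eq. 13 is simply a cosine-similarity argmax between student and teacher patch features. A small NumPy sketch (the feature shapes are illustrative assumptions):

```python
import numpy as np

def best_match_indices(R_s: np.ndarray, R_t: np.ndarray) -> np.ndarray:
    # For each student patch feature, return the index of the teacher patch
    # feature with the highest cosine similarity. Shapes: (T, dim) -> (T,)
    R_s_n = R_s / np.linalg.norm(R_s, axis=1, keepdims=True)
    R_t_n = R_t / np.linalg.norm(R_t, axis=1, keepdims=True)
    sim = R_s_n @ R_t_n.T        # (T, T) cosine similarity matrix
    return sim.argmax(axis=1)

rng = np.random.default_rng(0)
R_t = rng.normal(size=(9, 16))   # teacher patch features
R_s = R_t[::-1].copy()           # student features: teacher rows reversed
idx = best_match_indices(R_s, R_t)
# Each reversed row matches its original position among the teacher features.
assert (idx == np.arange(8, -1, -1)).all()
```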
IV-C Inter-level Multi-instance Distillation

IV-C1 Motivation

The intra-level multi-instance distillation (Sec. IV-B) does not take into account the semantic relation between the region-level crops and the image-level crops from the same augmented view. More abundant information describing the key regions can rest in the relation between region-level and image-level crops of the same augmented view. Ideally, in our framework, the fine-grained semantics should be consistent between image-level and region-level crops.

To this end, we further propose the inter-level multi-instance distillation to exploit this aspect. It pulls the cropped image-region pairs that represent similar fine-grained categories close together, while pushing those that represent dissimilar fine-grained categories far apart.

IV-C2 Notations & Definitions

Still consider the crops $\mathbf{X}^s\in\mathbf{V}'$ inside the student net and the crops $\mathbf{X}^t\in\mathbf{V}$ inside the teacher net. The crops $\mathbf{X}^s\in\mathbf{V}'$ from the student net organize crop pairs as $\mathcal{V}=\{(i,j)\,|\,\mathbf{X}^s_i\in\mathbf{V}',\,\mathbf{X}^s_j\in\mathbf{V}',\,i\neq j\}$, where $\mathbf{X}^s_i$ and $\mathbf{X}^s_j$ correspond to either a region-level bag embedding $\mathbf{z}^s_i,\mathbf{z}^s_j$ or an image-level bag embedding $\bar{\mathbf{z}}^s_i,\bar{\mathbf{z}}^s_j$.
IV-C3 Inter-level Multi-instance Distillation of Student Net

For the student net $s$, the divergence between the embeddings of each crop pair, whether image-level or region-level, is minimized by the Kullback–Leibler divergence [68], given by

$$\mathcal{L}_\mathcal{S}=\frac{1}{|\mathcal{V}|}\sum_{(i,j)\in\mathcal{V}}\left(\mathbf{z}^s_i\log\frac{\mathbf{z}^s_i}{\mathbf{z}^s_j}+\mathbf{z}^s_i\log\frac{\mathbf{z}^s_i}{\bar{\mathbf{z}}^s_j}+\bar{\mathbf{z}}^s_i\log\frac{\bar{\mathbf{z}}^s_i}{\mathbf{z}^s_j}+\bar{\mathbf{z}}^s_i\log\frac{\bar{\mathbf{z}}^s_i}{\bar{\mathbf{z}}^s_j}\right). \tag{14}$$
IV-C4 Inter-level Multi-instance Distillation of Teacher Net

Similarly, for the teacher net $t$, we have the crop pairs $\mathcal{V}'=\{(i,j)\,|\,\mathbf{X}^t_i\in\mathbf{V},\,\mathbf{X}^t_j\in\mathbf{V},\,i\neq j\}$, where $\mathbf{X}^t_i$ and $\mathbf{X}^t_j$ correspond to either a region-level bag embedding $\mathbf{z}^t_i,\mathbf{z}^t_j$ or an image-level bag embedding $\bar{\mathbf{z}}^t_i,\bar{\mathbf{z}}^t_j$. The learning objective of the inter-level multi-instance distillation for the teacher net is also measured by the Kullback–Leibler divergence [68], and can be computed as

$$\mathcal{L}_\mathcal{T}=\frac{1}{|\mathcal{V}'|}\sum_{(i,j)\in\mathcal{V}'}\left(\mathbf{z}^t_i\log\frac{\mathbf{z}^t_i}{\mathbf{z}^t_j}+\mathbf{z}^t_i\log\frac{\mathbf{z}^t_i}{\bar{\mathbf{z}}^t_j}+\bar{\mathbf{z}}^t_i\log\frac{\bar{\mathbf{z}}^t_i}{\mathbf{z}^t_j}+\bar{\mathbf{z}}^t_i\log\frac{\bar{\mathbf{z}}^t_i}{\bar{\mathbf{z}}^t_j}\right). \tag{15}$$
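A minimal sketch of the four KL terms shared by Eqs. 14 and 15, written for one net; the bag embeddings are assumed to be softmax-normalized probability vectors, and the counts and dimensions below are illustrative assumptions:

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    # Kullback-Leibler divergence KL(p || q) for probability vectors.
    return float((p * np.log(p / q)).sum())

def inter_level_loss(z: np.ndarray, z_bar: np.ndarray) -> float:
    # Average the four region/image KL terms over all ordered crop pairs (i, j).
    n = z.shape[0]
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total += (kl(z[i], z[j]) + kl(z[i], z_bar[j])
                      + kl(z_bar[i], z[j]) + kl(z_bar[i], z_bar[j]))
            count += 1
    return total / count

def to_dist(x: np.ndarray) -> np.ndarray:
    # Normalize an embedding into a probability vector.
    e = np.exp(x)
    return e / e.sum()

rng = np.random.default_rng(0)
z = np.stack([to_dist(v) for v in rng.normal(size=(3, 8))])      # region bags
z_bar = np.stack([to_dist(v) for v in rng.normal(size=(3, 8))])  # image bags
loss = inter_level_loss(z, z_bar)
assert loss > 0.0   # KL is non-negative, and positive for distinct bags
```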
IV-D Loss and Implementation Details

Algorithm 1: Pseudo code of the proposed Cross-level Multi-instance Distillation, PyTorch style.

```python
# t, s: teacher and student net; h(.): linear layer
# X: origin input; X_v, X_r: two augmented views
# Ci, Cr: view (image) and region center

t.params = s.params
for X in loader:
    X1, X2 = augment(X), augment(X)
    R_t, R_s = t(X1), s(X2)                          # patch-wise embeddings
    z_x_t, z_x_s = h(R_t), h(R_s)                    # instance embeddings
    for z_x_s, z_x_t in region_crops:                # bag embedding for region crops
        z_s, z_t = mean({z_x_s}), mean({z_x_t})
    for z_x_s, z_x_t in image_crops:                 # bag embedding for image crops
        z_s_bar, z_t_bar = mean({z_x_s}), mean({z_x_t})
    # Intra-level Multi-instance Distillation
    L_I = H_I(z_t_bar, z_s_bar)                      # for image-level crops
    L_R = H_R(z_t, z_s, R_t, R_s)                    # for region-level crops
    # Inter-level Multi-instance Distillation
    L_T = H_T(z_t, z_t_bar)                          # for region-image crops in t
    L_S = H_S(z_s, z_s_bar)                          # for region-image crops in s

    loss = (L_I + L_R) / 2 + lam * (L_T + L_S)       # total loss
    loss.backward()
    update(s)                                        # update student
    t.params = a * t.params + (1 - a) * s.params     # EMA for teacher
    Ci = b * Ci + (1 - b) * cat([R_t, R_s]).mean(0)  # EMA for image center
    Cr = b * Cr + (1 - b) * cat([z_t, z_s]).mean(0)  # EMA for region center

def H_I(t, s):
    t = t.detach()                                   # stop gradient
    s = softmax(s / tmp_s, dim=-1)
    t = softmax((t - Ci) / tmp_t, dim=-1)
    return -(t * log(s)).sum(dim=-1).mean()

def H_R(z_t, z_s, R_t, R_s):
    R_t = R_t.detach()                               # stop gradient
    R_s = softmax(R_s / tmp_s, dim=-1)
    R_t = softmax((R_t - Cr) / tmp_t, dim=-1)
    sim_matrix = torch.matmul(z_s, z_t.permute(0, 2, 1))
    sim_idx = sim_matrix.max(dim=-1)[1].unsqueeze(2)
    R_t_idxed = torch.gather(R_t, 1, sim_idx.expand(-1, -1, R_t.size(2)))
    return -(R_t_idxed * log(R_s)).sum(dim=-1).mean()

def H_T(z, z_bar):
    return F.kl_div(z, z_bar)

def H_S(z, z_bar):
    return F.kl_div(z, z_bar)
```
The overall learning objective of the proposed CMD framework is a combination of the intra-level and inter-level multi-instance distillation, computed as

$$\mathcal{L}=(\mathcal{L}_I+\mathcal{L}_R)/2+\lambda_1(\mathcal{L}_\mathcal{S}+\mathcal{L}_\mathcal{T}), \tag{16}$$

where $\lambda_1$ is a loss weight that balances the impact of the intra- and inter-level multi-instance distillation. Empirically, we set $\lambda_1=0.1$. To intuitively demonstrate the specific steps and the implementation of the proposed CMD, a PyTorch-style algorithm is attached in Algorithm 1.
Following the convention of the recent self-distillation with no labels paradigm (e.g., DINO [12], EsViT [21]), the parameters of the teacher net and the student net are updated in an alternating manner. Given a frozen teacher net, the student net is updated by minimizing its full learning objective as $\theta_s\leftarrow\operatorname*{arg\,min}_{\theta_s}\mathcal{L}(s,t;\theta_s)$. In turn, the teacher net is updated by an exponential moving average (EMA) of the weights of the student net, presented as $\theta_t\leftarrow\lambda\,\theta_t+(1-\lambda)\,\theta_s$. Here $\lambda$ is a weight parameter that follows a cosine schedule, varying from 0.996 to 1.
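A small sketch of this update rule. The cosine momentum schedule below follows the common DINO recipe (rising from the base value to 1 over training); treating it as the exact schedule used here is an assumption, and the toy parameter lists are illustrative:

```python
import math

def ema_momentum(step: int, total_steps: int, base: float = 0.996) -> float:
    # Cosine schedule: equals `base` at step 0 and reaches 1.0 at the final step.
    return 1.0 - (1.0 - base) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0

def ema_update(theta_t, theta_s, lam):
    # theta_t <- lam * theta_t + (1 - lam) * theta_s, element-wise.
    return [lam * wt + (1.0 - lam) * ws for wt, ws in zip(theta_t, theta_s)]

teacher = [1.0, 2.0]   # toy teacher parameters
student = [3.0, 4.0]   # toy student parameters
teacher = ema_update(teacher, student, ema_momentum(0, 100))
```

As the momentum approaches 1 near the end of training, the teacher effectively freezes.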
+
2127
By default, both the teacher net and the student net have 2 image-level crops and 8 region-level crops. The image-level crops have a size of $224\times 224\times 3$, the region-level crops have a size of $96\times 96\times 3$, and each instance for multiple instance modeling has a size of $32\times 32\times 3$. Consequently, each image-level crop has 49 instances, and each region-level crop has 9 instances.
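The instance counts follow directly from tiling each crop with non-overlapping $32\times 32$ patches:

```python
def num_instances(crop_size: int, patch_size: int) -> int:
    # Number of non-overlapping square patch instances tiling a square crop.
    assert crop_size % patch_size == 0
    return (crop_size // patch_size) ** 2

assert num_instances(224, 32) == 49   # image-level crop: (224/32)^2 = 7^2
assert num_instances(96, 32) == 9     # region-level crop: (96/32)^2 = 3^2
```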
+
2147
Both the teacher net and the student net use the Swin-Tiny backbone [67] as the feature extractor, and the pre-trained weights from ImageNet are used for initialization. To train the entire framework, the initial learning rate is $5\times 10^{-4}$ and the weight decay is set to 0.05. The AdamW optimizer is utilized, and model training terminates after 300 epochs.
+
2155
IV-E Theoretical Analysis

Overall, SSL uses an image-level learning objective, but FGVC requires a region-level learning objective to discern tiny discriminative patterns, which easily leads to an ambiguous gradient problem. This subsection provides a conceptual, high-level discussion of this problem.
+
2159
Given a dataset of $N$ unlabeled samples, assume after feature extraction each sample is represented by $\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N$, where $\mathbf{x}_i$ is the $i$-th sample and $\mathbf{x}_i\in\mathbb{R}^{1\times(w\cdot h\cdot c)}$. Here $w$, $h$ and $c$ are proportional to the width, height and channel of the deep features, respectively, and the representations are high dimensional.
For conventional SSL (e.g., SimCLR, MoCo), the learning objective $\mathcal{L}_{SSL}$ is to minimize the distance between $\mathbf{x}_i$ and a sample $\mathbf{x}^+$ of the same category, while maximizing the distance between $\mathbf{x}_i$ and a sample $\mathbf{x}^-$ of a different category, given by

$$\mathcal{L}_{SSL}=\sum_{i}^{N}\left(\min\,\mathrm{Dist}(\mathbf{x}_i,\mathbf{x}^+)+\max\,\mathrm{Dist}(\mathbf{x}_i,\mathbf{x}^-)\right), \tag{17}$$

where $\mathrm{Dist}$ is the distance metric, e.g., the $\ell_2$ loss or the KL loss.
For FGVC, still consider the above dataset. The difference is that these $N$ samples are from the same coarse category (e.g., dog), so they lie much closer together in the feature space.
Discerning between fine-grained categories relies on a tiny and subtle feature $\mathbf{x}^1_i\in\mathbb{R}^{1\times(w_1\cdot h_1\cdot c)}$ within $\mathbf{x}_i\in\mathbb{R}^{1\times(w\cdot h\cdot c)}$ for categorization, where only a minor portion $w_1$ and $h_1$ of the width and height dimensions is useful, and $w_1\cdot h_1\ll w\cdot h$. Let us denote the rest of the feature as $\mathbf{x}^2_i\in\mathbb{R}^{1\times((w-w_1)\cdot(h-h_1)\cdot c)}$.
In image space, given a $256\times 256$ image, the fine-grained patterns may only occupy tens of pixels (e.g., 50). In feature space, when both width and height are down-sampled (e.g., to a quarter), the fine-grained patterns $\mathbf{x}^1_i$ may only occupy a handful of positions (e.g., 3), while the overall feature map $\mathbf{x}_i$ has a size of $64\times 64$. The estimated ratio in feature space is $(w_1\cdot h_1)/(w\cdot h)\approx 3/(64\times 64)\approx 7.3\times 10^{-4}$.
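This back-of-the-envelope estimate is easy to verify:

```python
# About 3 informative positions out of a 64x64 feature map.
fine_positions = 3
total_positions = 64 * 64
ratio = fine_positions / total_positions
assert total_positions == 4096
assert abs(ratio - 7.3e-4) < 1e-5   # roughly 0.073% of all positions
```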
When using conventional SSL for FGVC, it implements Eq. 17, in which $\mathbf{x}^1_i$ should have the dominant impact. The gradient is

$$\frac{\partial\mathcal{L}_{SSL}}{\partial\mathbf{x}_i}=\frac{\partial\mathcal{L}_{SSL}}{\partial\mathbf{x}^1_i}\cdot\frac{\partial\mathbf{x}^1_i}{\partial\mathbf{x}_i}=\frac{\partial\left(\min\,\mathrm{Dist}(\mathbf{x}_i,\mathbf{x}^+)\right)}{\partial\mathbf{x}^1_i}\cdot\frac{\partial\mathbf{x}^1_i}{\partial\mathbf{x}_i}+\frac{\partial\left(\max\,\mathrm{Dist}(\mathbf{x}_i,\mathbf{x}^-)\right)}{\partial\mathbf{x}^1_i}\cdot\frac{\partial\mathbf{x}^1_i}{\partial\mathbf{x}_i}. \tag{18}$$
The dramatic ratio $(w_1\cdot h_1)/(w\cdot h)$ leads to a constant value in most of the channels when calculating the gradient $\partial\mathbf{x}^1_i/\partial\mathbf{x}_i$. In fact, according to the above estimation, only about 0.07% of the channels are informative for fine-grained patterns. This is likely to cause ambiguous gradients, yielding a less discriminative feature space for FGVC.
In contrast, we formulate each image patch as an instance to better discern the fine-grained pattern $\mathbf{x}^1_i$. The proposed CMD is based not on a sample-wise learning objective but on an instance-wise learning objective, given by

$$\mathcal{L}_{CMD}=\sum_{i}^{N}\left(\min\,\mathrm{Dist}(\mathbf{x}^1_i,\mathbf{x}^+_1)+\max\,\mathrm{Dist}(\mathbf{x}^2_i,\mathbf{x}^-_2)\right). \tag{19}$$
Consequently, the impact of the fine-grained pattern $\mathbf{x}^1_i$ is

$$\frac{\partial\mathcal{L}_{CMD}}{\partial\mathbf{x}_i}=\frac{\partial\left(\min\,\mathrm{Dist}(\mathbf{x}^1_i,\mathbf{x}^+_1)\right)}{\partial\mathbf{x}^1_i}. \tag{20}$$
The negligible term $\partial\mathbf{x}^1_i/\partial\mathbf{x}_i$ that leads to ambiguous gradients is eliminated in the proposed CMD, which helps learn a more discriminative FGVC feature space. In addition, the calculation of $\partial\mathbf{x}^1_i/\partial\mathbf{x}_i$ can be specified by the distance function $\mathrm{Dist}$ of a given SSL pipeline.
V. Experimental Analysis

V-A Dataset & Evaluation Protocols

CUB-200-2011 is a FGVC dataset that contains 200 bird species. It has 5,994 training samples and 5,794 test samples.

Stanford Cars contains 196 fine-grained categories, with 8,144 training samples and 8,041 test samples.

FGVC Aircraft contains 102 fine-grained aircraft categories. It has a total of 6,667 training samples and 3,333 test samples.

NA-BIRDS contains 555 fine-grained bird species. It has a total of 23,929 training samples and 24,633 test samples.

For simplicity, these datasets are denoted as CUB, CAR, AIR and NAB, respectively. All our experiments follow the default training/testing splits of these datasets. Following the evaluation protocol of the prior self-supervised fine-grained visual categorization work [14], the training set of each fine-grained dataset is used for the self-supervised pre-training of the proposed CMD. After that, the pre-trained self-supervised fine-grained representation is fine-tuned in a linear-probing manner to support FGVC.
Following [14], linear probing on the self-supervised representation is evaluated by top-1 accuracy. The image retrieval setting is also used for evaluation. As a form of nearest-neighbor classification, it helps benchmark whether the pre-trained class-agnostic representation learned by SSL is discriminative for fine-grained categories. Rank-1, Rank-5 and mAP are reported. Following [14], given the learned features, one image at a time is used as the query to retrieve the remaining images in the test set. This operation goes through all the samples in the test set, and the average is reported.
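The leave-one-out retrieval protocol above can be sketched as follows; the toy features and the simple Euclidean ranking are illustrative assumptions, not the exact evaluation code of [14]:

```python
def rank_k_and_ap(query_idx, feats, labels, k=1):
    """Rank-k hit and average precision for one query against the rest
    of the test set, ranked by squared Euclidean distance."""
    q, ql = feats[query_idx], labels[query_idx]
    others = [i for i in range(len(feats)) if i != query_idx]
    others.sort(key=lambda i: sum((a - b) ** 2 for a, b in zip(feats[i], q)))
    hit = any(labels[i] == ql for i in others[:k])
    # Average precision over the ranked positions holding a same-class image.
    hits, precisions = 0, []
    for pos, i in enumerate(others, start=1):
        if labels[i] == ql:
            hits += 1
            precisions.append(hits / pos)
    ap = sum(precisions) / len(precisions) if precisions else 0.0
    return hit, ap

# Toy test set: 2-D features, two classes; every query's nearest
# neighbor is its same-class mate.
feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = ["A", "A", "B", "B"]
results = [rank_k_and_ap(i, feats, labels, k=1) for i in range(len(feats))]
rank1 = sum(hit for hit, _ in results) / len(results)  # averaged over queries
mAP = sum(ap for _, ap in results) / len(results)
```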

V-B. Comparison with Self-supervised Methods

The proposed method is compared with a variety of self-supervised visual representation learning methods (SimCLR [9], BYOL [43], MoCo v2 [11], DINO [12], BarlowTwins [44], SimSiam [10], MAE [69], EsViT [21], VICReg [45], I-JEPA [47], EPM [48]), a self-supervised fine-grained category discovery method InfoSieve [42] and a contemporary self-supervised FGVC method LCR [14].

V-B1. On Linear Probing

Table I reports the top-1 accuracy of the proposed method and existing self-supervised learning methods. On CUB and CAR, it yields performance gains of 10.14% and 11.13%, respectively, over the second best-performing method LCR [14], a contemporary self-supervised FGVC method. A gain of at least 20% in accuracy is observed when compared with the other self-supervised visual representation learning methods. On AIR, it yields a gain of 5.80% over the second best-performing method EsViT [21]. It also shows an 8.52% gain over LCR [14], and a gain of more than 12% over the other self-supervised visual learning methods. Similarly, on NAB, it shows a 25.60% improvement over the second-best method, and around a 40% improvement over the other self-supervised visual representation learning methods. On the other hand, when using the ResNet-50 backbone, which has weaker feature representation ability than the Swin Transformer, the proposed CMD still significantly outperforms all the compared methods on all three datasets by up to 5.07%. When using the ViT-B-16 backbone, the proposed CMD outperforms the other latest methods with the same backbone by up to 9.62%, 12.67%, 5.68% and 25.55% on CUB, CAR, AIR and NAB, respectively.

TABLE I: Comparison with the other state-of-the-art self-supervised learning frameworks with linear probing top-1 accuracy (in %), on the CUB, CAR, AIR and NAB datasets. By default, the compared results are directly cited from [14]. †: our implementation with official code and default parameter settings. '*': only one-decimal results are reported. '-': neither official implementation nor source code. Parameters (Params., in million) and throughput (im/s) are both of the feature extractor, and are measured on an NVIDIA V100 GPU with 128 samples per forward pass. Results marked in bold, in red and in blue are the best results among all the compared methods, the ViT-B-16 backbone and the ResNet-50 backbone, respectively.

| Method | Proc. & Year | Architecture | Params. | im/s | CUB | CAR | AIR | NAB |
|---|---|---|---|---|---|---|---|---|
| Supervised | CVPR2016 | ResNet-50 | 23 | 1237 | 81.34 | 91.02 | 87.13 | 82.23 |
| Supervised | ICCV2021 | Swin-Tiny | 28 | 808 | 91.09 | 93.59 | 93.43 | 87.46 |
| DINO [12] | CVPR2021 | ResNet-50 | 23 | 1237 | 16.66 | 10.51 | 12.93 | 10.98 |
| Barlow [44] | ICML2021 | ResNet-50 | 23 | 1237 | 33.45 | 31.91 | 34.77 | 25.36 |
| VICReg [45] | ICLR2022 | ResNet-50 | 23 | 1237 | 37.78 | 30.80 | 36.00 | 32.49 |
| SimCLR [9] | ICML2020 | ResNet-50 | 23 | 1237 | 38.39 | 49.41 | 45.22 | 38.71 |
| BYOL [43] | NeurIPS2020 | ResNet-50 | 23 | 1237 | 39.27 | 45.21 | 37.62 | 31.83 |
| SimSiam [10] | CVPR2021 | ResNet-50 | 23 | 1237 | 39.97 | 58.89 | 43.06 | 33.40 |
| MAE [69] | CVPR2022 | ResNet-50 | 23 | 1237 | 7.68† | 12.46† | 14.85† | 11.56† |
| MoCo v2 [11] | ArXiv2020 | ResNet-50 | 23 | 1237 | 68.30 | 58.43 | 52.54 | 47.17 |
| BYOL+LCR [14] | CVPR2023 | ResNet-50 | 23 | 1237 | 51.20 | 50.64 | 45.94 | - |
| EsViT [21] | ICLR2022 | Swin-Tiny | 28 | 808 | 61.67† | 58.25† | 59.59† | 43.55† |
| LCR [14] | CVPR2023 | ResNet-50 | 23 | 1237 | 71.31 | 60.75 | 55.87 | 50.94 |
| I-JEPA [47] | CVPR2023 | ViT-B-16 | 85 | 321 | 54.09† | 49.66† | 48.83† | 39.62† |
| EPM [48] | CVPR2023 | ViT-B-16 | 85 | 321 | 57.36† | 51.65† | 49.61† | 41.38† |
| InfoSieve [42] | NeurIPS2024 | ViT-B-16 | 85 | 321 | 69.4* | 55.7* | 56.3* | 48.95 |
| CMD (Ours) | 2024 | ResNet-50 | 23 | 1237 | 76.38 | 65.49 | 60.21 | 73.13 |
| CMD (Ours) | 2024 | ViT-B-16 | 85 | 321 | 79.02 | 68.37 | 61.98 | 74.50 |
| CMD (Ours) | 2024 | Swin-Tiny | 28 | 808 | 81.45 | 71.87 | 64.39 | 76.54 |

V-B2. On Image Retrieval

Table II reports the image retrieval results of the proposed method, the contemporary LCR [14], two SSL methods (MoCo v2 [11], EsViT [21]) and the fully-supervised ResNet-50. On CUB, the proposed method significantly outperforms all the compared methods, with gains of 20.19% in Rank-1, 13.43% in Rank-5 and 6.45% in mAP. Similarly, on AIR, it yields gains of 9.34% in Rank-1, 6.83% in Rank-5 and 13.18% in mAP. On CAR, its Rank-1 and Rank-5 are slightly inferior to LCR [14]; however, its mAP reaches 16.21%, which is 7.34% higher than LCR [14]. On NAB, it significantly outperforms all the compared methods: the improvement over the second-best is 16.95% in Rank-1, 16.69% in Rank-5 and 19.27% in mAP.

The inferior performance of the proposed CMD relative to LCR [14] on CAR may be explained by the less realistic samples in that dataset. Specifically, many samples in CAR are synthetic, where the background of a car is artificially set to pure white. As the proposed CMD relies more on the per-instance responses in an image, the artificially low contrast in CAR makes the instance representation less discriminative, leading to a slight performance degradation. In contrast, the experimental results on the NAB dataset further demonstrate the effectiveness of the proposed CMD on large-scale and photo-realistic images, which are closer to real-world FGVC scenarios.

TABLE II: Performance comparison between the proposed method and other methods under the image retrieval setting. Evaluation metrics include Rank-1, Rank-5 and mAP (in %). Experiments are conducted on the CUB, CAR, AIR and NAB datasets.

| Dataset | Method | Architecture | Rank-1 | Rank-5 | mAP |
|---|---|---|---|---|---|
| CUB | Supervised | ResNet-50 | 10.65 | 29.32 | 5.09 |
| CUB | MoCo v2 [11] | ResNet-50 | 17.07 | 41.46 | 8.13 |
| CUB | EsViT [21] | Swin-Tiny | 44.88 | 72.83 | 20.54 |
| CUB | LCR [14] | ResNet-50 | 49.69 | 75.23 | 24.01 |
| CUB | CMD (Ours) | Swin-Tiny | 69.88 | 88.66 | 30.46 |
| Cars | Supervised | ResNet-50 | 4.91 | 16.98 | 2.34 |
| Cars | MoCo v2 [11] | ResNet-50 | 10.94 | 29.57 | 3.12 |
| Cars | EsViT [21] | Swin-Tiny | 23.15 | 45.97 | 4.54 |
| Cars | LCR [14] | ResNet-50 | 34.56 | 60.75 | 8.87 |
| Cars | CMD (Ours) | Swin-Tiny | 32.66 | 53.91 | 16.21 |
| Aircraft | Supervised | ResNet-50 | 5.16 | 14.22 | 2.61 |
| Aircraft | MoCo v2 [11] | ResNet-50 | 19.38 | 39.90 | 6.30 |
| Aircraft | EsViT [21] | Swin-Tiny | 23.15 | 45.97 | 4.54 |
| Aircraft | LCR [14] | ResNet-50 | 34.33 | 61.09 | 15.43 |
| Aircraft | CMD (Ours) | Swin-Tiny | 43.67 | 67.92 | 29.61 |
| NAB | Supervised | ResNet-50 | 5.08 | 17.29 | 2.57 |
| NAB | MoCo v2 [11] | ResNet-50 | 16.51 | 37.29 | 5.08 |
| NAB | EsViT [21] | Swin-Tiny | 37.89 | 61.92 | 16.75 |
| NAB | LCR [14] | ResNet-50 | 46.73 | 68.95 | 25.68 |
| NAB | CMD (Ours) | Swin-Tiny | 63.68 | 85.64 | 44.95 |

V-C. Comparison with Other Alternative Solutions

Some localization-based self-supervised learning pipelines may also learn fine-grained representations. For a broader comparison, these recent self-supervised localization pipelines [14, 51, 52, 53, 54, 55] are also benchmarked on self-supervised FGVC. Table III reports their performance. The proposed method significantly outperforms all the compared methods in top-1 accuracy on the three benchmarks, surpassing the second best-performing method by 8.61%, 8.16% and 8.31%, respectively. On NAB, it shows a 24.19% improvement over the second-best method, and around a 30% improvement over the other self-supervised visual representation learning methods.

Under the image retrieval setting, it outperforms the second best-performing method by 20.52% and 8.72% on CUB and AIR, respectively. On CAR, its Rank-1 metric is slightly inferior to LCR [14] and its two variations, but is still higher than the other methods by more than 20%. On the other hand, when using the same ResNet-50 backbone, it still outperforms all the compared methods in top-1 accuracy on all three datasets, as well as in Rank-1 on CUB and AIR. On NAB, it significantly outperforms all the compared methods; the improvement over the second-best is 31.76% in Rank-1.

TABLE III: Comparison with the alternatives from self-supervised localization, with linear probing top-1 accuracy (in %) and Rank-1 retrieval (in %), on CUB-200-2011, Stanford Cars, FGVC Aircraft and NA-Birds. Results marked in bold and in blue are the best results among all the compared methods and the best results with the ResNet-50 backbone, respectively. By default, the compared results are directly cited from [14]. †: our implementation with official code and default parameter settings; '-': neither reported nor with official source code.

| Method | Architecture | CUB (Top-1) | CAR (Top-1) | AIR (Top-1) | NAB (Top-1) | CUB (Rank-1) | CAR (Rank-1) | AIR (Rank-1) | NAB (Rank-1) |
|---|---|---|---|---|---|---|---|---|---|
| MoCo v2 + Bilinear [14] | ResNet-50 | 68.44 | 58.06 | 53.01 | 46.72 | 41.27 | 30.89 | 30.80 | 19.89 |
| SAM-SSL [51] | ResNet-50 | 68.59 | 58.49 | 52.97 | 45.14 | 18.38 | 14.26 | 21.72 | 19.08 |
| SAM-SSL + Bilinear [14] | ResNet-50 | 71.56 | 59.12 | 55.12 | 48.63 | 44.20 | 35.38 | 32.10 | 22.36 |
| DiLo [52] | ResNet-50 + FPN | 64.14 | - | - | - | - | - | - | - |
| CVSA [53] | ResNet-50 + BYOL [43] | 65.02 | - | - | - | - | - | - | - |
| LEWEL [54] | ResNet-50 | 69.27 | 59.02 | 54.33 | 48.39 | 19.23 | 12.01 | 20.67 | 19.57 |
| ContrastiveCrop [55] | ResNet-50 | 68.82 | 61.66 | 54.40 | 49.16 | 18.71 | 13.61 | 20.88 | 20.19 |
| LCR + MultiTask [14] | ResNet-50 | 72.84 | 63.71 | 56.08 | 52.35 | 49.36 | 33.55 | 34.95 | 31.92 |
| CMD (Ours) | ResNet-50 | 76.38 | 65.49 | 60.21 | 73.13 | 60.08 | 31.95 | 39.56 | 59.35 |
| CMD (Ours) | Swin-Tiny | 81.45 | 71.87 | 64.39 | 76.54 | 69.88 | 32.66 | 43.67 | 63.68 |

TABLE IV: Ablation studies on each component. Metric: top-1 accuracy (in %) on CUB, CAR and AIR.

| V | MIL | R | T | S | CUB | CAR | AIR |
|---|---|---|---|---|---|---|---|
| ✓ |  |  |  |  | 43.72 | 46.03 | 50.68 |
| ✓ | ✓ |  |  |  | 48.61 | 51.15 | 53.95 |
| ✓ | ✓ | ✓ |  |  | 61.98 | 60.51 | 59.85 |
| ✓ | ✓ | ✓ | ✓ |  | 65.72 | 62.96 | 61.05 |
| ✓ | ✓ | ✓ | ✓ | ✓ | 81.45 | 71.87 | 64.39 |

TABLE V: Ablation studies on the label proportions of CMD and other methods on CUB and AIR. Metric: top-1 accuracy (in %).

| Dataset | Method | 100% | 50% | 20% |
|---|---|---|---|---|
| CUB | ResNet-50 | 68.17 | 58.99 | 46.54 |
| CUB | MoCo v2 [11] | 68.30 | 60.96 | 46.91 |
| CUB | LCR [14] | 71.31 | 66.52 | 55.33 |
| CUB | CMD (Ours) | 81.45 | 76.44 | 63.70 |
| CUB | Gain | +10.14 | +9.92 | +8.37 |
| AIR | ResNet-50 | 47.38 | 37.83 | 28.20 |
| AIR | MoCo v2 [11] | 52.54 | 45.52 | 35.17 |
| AIR | LCR [14] | 55.87 | 48.22 | 38.55 |
| AIR | CMD (Ours) | 64.39 | 55.56 | 41.74 |
| AIR | Gain | +8.52 | +7.34 | +3.19 |

TABLE VI: Impact of λ1 on linear probing of the proposed CMD. Top-1 accuracy is presented in %. Experiments are conducted on CUB.

| λ1 | 10 | 1 | 0.1 | 0.01 | 0.001 |
|---|---|---|---|---|---|
| Top-1 acc. | 78.03 | 80.82 | 81.45 | 81.36 | 79.25 |

V-D. Ablation Studies

V-D1. On Each Component

Table IV reports each component's impact on the top-1 accuracy of the entire framework. The baseline is a DINO-based framework that only has two image-level crops for both the student and teacher nets, denoted as V. On top of it, the impacts of multi-instance learning (MIL), region-level crops (R), inter-level distillation for the teacher net (T), and inter-level distillation for the student net (S) are all studied. It can be seen that the introduction of MIL leads to performance gains of 4.89%, 5.12% and 3.27% on CUB, CAR and AIR, respectively. The introduction of region-level crops significantly improves the fine-grained representation, yielding performance gains of 13.37% and 9.36% on CUB and CAR, respectively. This indicates that the joint learning of image-level and region-level crops, although straightforward, significantly benefits the fine-grained representation.

On the other hand, the inter-level multi-instance distillation on the teacher net (denoted as T) leads to performance gains of 3.74%, 2.45% and 1.20% on CUB, CAR and AIR, respectively, while the inter-level multi-instance distillation on the student net (denoted as S) leads to performance gains of 15.73%, 8.91% and 3.34% on CUB, CAR and AIR.

To better understand the much more significant improvement of S over T, an additional t-SNE visualization of the feature space is shown in Fig. 4. Specifically, we randomly select eight fine-grained categories for each dataset, and extract the feature embeddings from T and T+S, respectively. The inter-level multi-instance distillation for the student net (S) allows the feature embeddings from different fine-grained categories (in different colors) to be more separated, which significantly improves the feature separation of fine-grained categories.

V-D2. Intra-level vs. Inter-level Distillation

Another important way to analyze the contribution of each component is through the proposed intra-level and inter-level multi-instance distillation, according to the per-component results in Table IV. The intra-level multi-instance distillation, consisting of V, MIL and R, leads to top-1 accuracy improvements of 18.26%, 14.48% and 9.17% on CUB, CAR and AIR, respectively. In contrast, the inter-level multi-instance distillation, consisting of T, MIL and S, leads to top-1 accuracy improvements of 19.47%, 11.36% and 4.54% on CUB, CAR and AIR, respectively. Overall, both intra- and inter-level multi-instance distillation lead to a clear performance improvement for self-supervised fine-grained visual categorization. On the AIR dataset, the impact of intra-level multi-instance distillation is higher than that of inter-level multi-instance distillation (9.17% vs. 4.54%), which may be explained by AIR having fewer fine-grained categories than the other two datasets. Fewer fine-grained categories demand a representation that is more discriminative with respect to the image- and region-level crops, rather than to the semantic relations between fine-grained categories. In fact, the intra-level multi-instance distillation focuses on the relation between image- and region-level crops, which thus plays a more pivotal role.

V-D3. On Region-level Crop Number

By default, the region-level crop number for both the teacher net and student net is 8. To extensively investigate the impact of the region-level crop number, we test the scenarios where the framework has 2, 4, 6, 8, 10 and 12 region-level crops, and report the results on CUB and CAR in Fig. 5 (a) and (b), respectively. When the region-level crop number is very small, e.g., 2 or 4, the performance of the proposed CMD degrades greatly, e.g., to 66.71% and 71.46% on CUB. When the region-level crop number becomes larger, the performance of the proposed CMD becomes more stable, and reaches its maximum at 8 crops.

Figure 4: t-SNE visualization of the feature space when implementing inter-level multi-instance distillation only on the teacher net (T) versus on both the teacher and student nets (T+S).
Figure 5: Ablation studies on region-level crop number and instance size.
Figure 6: Activation comparison of the fine-grained patterns between the proposed CMD and existing SSL methods. Following existing FGVC works [22, 23, 24, 25, 26], the last-layer feature from the backbone is visualized by GradCAM.
Figure 7: By the formulation of multiple instance learning, the key instances (in red rectangles) in both region-level and image-level crops are properly activated.

V-D4. On Instance Sizes

By default, the instance size is set to 32 × 32 × 3 in the framework. To investigate the robustness of the multi-instance modeling, we systematically test instance sizes of 2, 4, 8, 16 and 32. Note that the instance size has to be a common divisor of 96 and 224, so 32 is the maximum size it can reach. The results on CUB and CAR are shown in Fig. 5. When the instance size is 2 or 4, the CMD collapses and does not yield meaningful results. Also, when the instance size is 8, the performance declines to 72.75% and 65.49% on CUB and CAR. These observations can be explained by the curse of dimensionality, as a too-small instance size significantly increases the dimension of the feature embedding. In contrast, when the instance size is larger, e.g., 16 or 32, the performance of CMD becomes stable owing to the proper size of the feature embedding.
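The size constraint and its effect on the instance count can be checked with a few lines of arithmetic; the crop sides 224 (image-level) and 96 (region-level) follow the setting above, while the helper names are ours:

```python
import math

def common_divisors(a, b):
    """All positive integers that divide both a and b."""
    g = math.gcd(a, b)
    return [d for d in range(1, g + 1) if g % d == 0]

# A square instance must tile both the 224x224 image-level crop and the
# 96x96 region-level crop, so its side must divide gcd(224, 96) = 32.
sizes = common_divisors(224, 96)

def instances_per_crop(crop, inst):
    """Number of non-overlapping square instances in a square crop."""
    return (crop // inst) ** 2

# Instance counts per (image-level, region-level) crop for each size:
# small sizes blow up the number of instances, hence the dimensionality issue.
table = {s: (instances_per_crop(224, s), instances_per_crop(96, s)) for s in sizes}
```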

Figure 8: Some failure cases, caused by occlusion, low contrast and varied viewpoints.

V-D5. On Different Label Proportions

When training the linear classifier, the number of training samples significantly impacts the performance. Table V reports the performance of the proposed method, the contemporary LCR [14], a self-supervised visual representation method MoCo v2 [11] and the fully-supervised ResNet-50. On CUB, when the label proportion is 100%, 50% and 20%, the proposed method shows performance gains of 10.14%, 9.92% and 8.37%, respectively, over the second best-performing method. On AIR, the gains over the second best-performing method are 8.52%, 7.34% and 3.19% under proportions of 100%, 50% and 20%.

V-D6. On Each Data Augmentation Operation

Modern SSL pipelines usually rely on multiple types of data augmentation to generate the crop pairs, as does the proposed CMD. Keeping the same settings as prior work [21], five commonly-used data augmentation operations, namely Random Horizontal Flipping, Color Jittering, Random-resized Cropping, Gaussian Blurring and Solarization, are used in the proposed CMD. For simplicity, they are denoted as RF, CJ, RC, GB and SZ, respectively. An ablation study is conducted to analyze the impact of each operation, with results reported in Table VII. In general, each of the five operations contributes positively to self-supervised FGVC, yielding total top-1 accuracy improvements of 3.51%, 3.31% and 3.12% on CUB, CAR and AIR, respectively. Specifically, the individual improvements of RF, CJ, RC, GB and SZ in top-1 accuracy are 0.58%, 0.78%, 0.73%, 0.82% and 0.60% on CUB; 0.54%, 0.82%, 0.67%, 0.71% and 0.57% on CAR; and 0.51%, 0.79%, 0.84%, 0.57% and 0.41% on AIR.
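The cumulative ablation in Table VII corresponds to stacking the augmentations one by one. The composition pattern can be sketched as below, with placeholder string-tagging operations standing in for the real image transforms (the operation bodies are illustrative, not actual augmentations):

```python
def compose(ops):
    """Chain augmentation operations into a single transform."""
    def pipeline(x):
        for op in ops:
            x = op(x)
        return x
    return pipeline

def tag(name):
    """Placeholder op: a real pipeline would transform the image here;
    we only record which operations were applied, in order."""
    return lambda x: x + "+" + name

# The five operations used by CMD, stacked as in the ablation rows.
ops = [tag(n) for n in ("RF", "CJ", "RC", "GB", "SZ")]
augment = compose(ops)
result = augment("img")
```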

TABLE VII: Ablation studies on each data augmentation operation. Metric: top-1 accuracy (in %) on CUB, CAR and AIR.

| RF | CJ | RC | GB | SZ | CUB | CAR | AIR |
|---|---|---|---|---|---|---|---|
|  |  |  |  |  | 77.94 | 68.56 | 61.27 |
| ✓ |  |  |  |  | 78.52 | 69.10 | 61.78 |
| ✓ | ✓ |  |  |  | 79.30 | 69.92 | 62.57 |
| ✓ | ✓ | ✓ |  |  | 80.03 | 70.59 | 63.41 |
| ✓ | ✓ | ✓ | ✓ |  | 80.85 | 71.30 | 63.98 |
| ✓ | ✓ | ✓ | ✓ | ✓ | 81.45 | 71.87 | 64.39 |

V-E. Sensitivity Analysis of λ1

The learning objective of the proposed CMD depends on the λ1 value in Eq. 16. As reported in Table VI, we test the settings where λ1 takes a value from {10, 1, 0.1, 0.01, 0.001} on the CUB-200-2011 dataset, and observe top-1 accuracies (in %) of {78.03, 80.82, 81.45, 81.36, 79.25}.

The results indicate that a too-small λ1 reduces the impact of inter-level multi-instance distillation (S and T), while a too-large λ1 overwhelms the impact of intra-level multi-instance distillation (I and R). Therefore, we choose the trade-off value of 0.1 for λ1.
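A minimal sketch of how such a trade-off weight combines the two objectives; the loss values below are illustrative placeholders, not the paper's actual loss functions:

```python
def total_loss(intra_loss, inter_loss, lam1=0.1):
    """Weighted combination in the spirit of Eq. 16: the intra-level term
    is kept at unit weight, and lam1 scales the inter-level
    multi-instance distillation term."""
    return intra_loss + lam1 * inter_loss

# A small lam1 mutes the inter-level term; a large one lets it dominate.
balanced = total_loss(1.0, 2.0, lam1=0.1)
dominated = total_loss(1.0, 2.0, lam1=10)
```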

V-F. Generalization Ability Test

V-F1. On Different Backbones

Apart from the Swin-Tiny and ResNet-50 backbones used in the main submission, we further test the generalization ability of the proposed CMD on other CNN and ViT backbones, including ResNet-101 [70], Convolutional vision Transformer (CvT) [71], Vision Longformer (ViL) [72] and Swin-Small [67].

The experiments are conducted on the FGVC Aircraft dataset, and the results are shown in Table VIII. The proposed CMD, after linear probing, achieves a top-1 accuracy of 60.21%, 61.58%, 58.71%, 62.18%, 64.39% and 66.25% with the ResNet-50, ResNet-101, CvT, ViL, Swin-Tiny and Swin-Small backbones, respectively. Overall, CMD generalizes well across a variety of CNN- and ViT-based backbones.

TABLE VIII: Generalization of the proposed CMD on different backbones. Top-1 accuracy is presented in %. Experiments are conducted on AIR.

| Backbone | Res-50 | Res-101 | CvT | ViL | Swin-T | Swin-S |
|---|---|---|---|---|---|---|
| Top-1 acc. | 60.21 | 61.58 | 58.71 | 62.18 | 64.39 | 66.25 |

TABLE IX: Generalization ability on other types of self-supervised learning pipelines. 𝒰, 𝒟 and 𝒞: cluster-, knowledge distillation- and contrastive learning-based paradigms. Metric: top-1 accuracy (in %) on CUB, CAR and AIR.

| Method | Category | Backbone | CUB | CAR | AIR |
|---|---|---|---|---|---|
| Suave [73] | 𝒰 | ResNet-50 | 62.89 | 54.25 | 47.60 |
| Suave + CM (Ours) | 𝒰 | ResNet-50 | 68.01 | 59.83 | 51.36 |
| Baseline (V) | 𝒟 | Swin-T | 43.72 | 46.03 | 50.68 |
| V + MIL | 𝒟 | Swin-T | 48.61 | 51.15 | 53.95 |
| CM (Ours) + CL | 𝒞 | Swin-T | 75.38 | 65.87 | 59.62 |
| CMD (Ours) | 𝒟 | Swin-T | 81.45 | 71.87 | 64.39 |

+
3102
+ We further investigate the generalization of the proposed CMD on other self-supervised learning paradigms, namely, clustering (
3103
+ 𝒰
3104
+ ) and contrastive learning (
3105
+ 𝒞
3106
+ ). Two experiments are designed. The first experiment is under the cluster based paradigm. Suave [73] is used as the baseline, and we embed the cross-level multiple instance formulation (denoted as CM) before implementing clustering. The second experiment is under the contrastive learning paradigm. We keep the all the rest components in the CMD as the same, and only replace all the distillation losses by the contrastive losses (denoted as CM+
3107
+
3108
+ 𝐶
3109
+
3110
+ 𝐿
3111
+ ). The scenario that does not involve the cross-level losses (denoted as
3112
+
3113
+ 𝑉
3114
+ +
3115
+ 𝑀
3116
+
3117
+ 𝐼
3118
+
3119
+ 𝐿
3120
+ ) is also involved for comparison.
3121
+
3122
+ Table IX reports the results. Under the clustering paradigm, the propose cross-level multi-instance formulation leads to a top-1 accuracy improvement of 5.12%, 5.58% and 4.76% on CUB, CAR and AIR, respectively. Besides, the introduction of contrastive learning paradigm over the cross-level multiple instance formulation can also significantly improve the representation ability. Compared with the vanilla self-supervised knowledge distillation pipeline with MIL (
3123
+
3124
+ 𝑉
3125
+ +
3126
+ 𝑀
3127
+
3128
+ 𝐼
3129
+
3130
+ 𝐿
3131
+ ), it leads to a top-1 accuracy improvement of 26.77%, 14.72% and 4.77%, respectively.
3132
+
3133
V-F3. Knowledge Distillation vs. Contrastive Learning

Replacing distillation with contrastive learning leads to top-1 accuracy declines of 6.07%, 6.00% and 4.77% on CUB, CAR and AIR, respectively. These outcomes indicate that the contrastive learning paradigm is less effective than knowledge distillation for self-supervised FGVC.

The explanation may be two-fold. First, self-supervised knowledge distillation does not rely on the concept of negative pairs or samples. In the context of FGVC, the differences between fine-grained categories are usually subtle, which leads to confusion between positive and negative samples (e.g., in contrastive learning) and therefore degrades performance. Second, self-supervised knowledge distillation can learn more discriminative semantic information through the teacher-student network. The teacher network produces soft representations to guide the student network, which allows the model to focus on more discriminative, detailed features and to better distinguish the subtle differences between fine-grained categories. In contrast, contrastive learning only separates the overall representations and does not focus on fine-grained differences.

V-G. Effectiveness over Fully-supervised FGVC Methods

It is worth investigating whether self-supervised FGVC methods, especially the proposed CMD, can improve the representation over existing fully-supervised FGVC methods. However, fully-supervised FGVC methods usually consist of more modules and trainable parameters than self-supervised methods, which solely use the image encoder along with a linear classifier. More modules and trainable parameters can fit the fine-grained category representation much better than the image encoder alone, which poses a predominant and unfair advantage over the self-supervised FGVC methods. Besides, self-supervised FGVC methods follow a two-stage training pipeline, where the supervision of the category label is only involved in the second stage (i.e., fine-tuning the image encoder with only a linear classifier). In contrast, fully-supervised FGVC methods are directly trained in a one-stage paradigm with the supervision of the category label.

Therefore, we analyze and compare their effectiveness under the one-stage fully-supervised paradigm. The student net branch serves as the auxiliary branch in [39]. The embedding from each crop, either region- or image-level, is processed by global maximum pooling followed by a multi-layer perceptron (MLP), and then supervised by the category label. These modifications are kept the same as [39] for fair evaluation; more implementation and architecture details can be found in [39]. The results are shown in Table X. With more trainable parameters, modules and direct supervision, the representation learned by the proposed CMD can also benefit FGVC under the full-supervision setting. It achieves a top-1 accuracy of 92.0%, 95.6% and 94.1% on CUB, CAR and AIR, respectively, significantly outperforming existing fully-supervised FGVC methods with the same ResNet-50 backbone. Besides, it achieves competitive performance with more recent methods (e.g., SIM-OFE [74]), whose methodology design and ViT-B-16 backbone are more powerful. These outcomes indicate the scalability and effectiveness of the proposed CMD under full supervision.

TABLE X: Comparison under the one-stage full-supervision paradigm. The proposed CMD is attached with global maximum pooling and a multi-head MLP. Its teacher and student nets are directly supervised by the category label, as in [39]. Following convention, results are directly cited from prior references and reported to one decimal. '-': not reported. The best and second-best performance is highlighted in bold and underline, respectively. Metric: top-1 accuracy (in %) on CUB, CAR and AIR.

| Method | Backbone | CUB | CAR | AIR |
|---|---|---|---|---|
| P-CNN [1] | VGG-19 | 87.3 | 93.3 | 90.6 |
| DCAL [35] | ViT-B-16 | 92.0 | 94.7 | 93.3 |
| P2P-Net [39] | ResNet-50 | 90.2 | 95.4 | 94.2 |
| ABC-Norm [75] | ResNet-50 | 87.8 | - | 93.2 |
| GCP [76] | ResNet-50 | 92.5 | 92.5 | 91.0 |
| PMRC [77] | DenseNet-101 | 91.5 | 95.2 | 94.0 |
| GTF [78] | DenseNet-161 | 91.5 | - | - |
| MP-FGVC [79] | ViT-B-16 | 91.8 | - | - |
| SIM-OFE [74] | ViT-B-16 | 92.3 | - | - |
| CMD + Full Supervision | ResNet-50 | 92.0 | 95.6 | 94.1 |

V-H. Visualization

V-H1. On Fine-grained Patterns

Fig. 6 visualizes the fine-grained patterns on CUB, AIR and CAR. The activation pattern of the proposed CMD is compared with three strong self-supervised learning methods, namely DINO [12], MoCo v2 [11] and EsViT [21]. The proposed CMD is more effective at highlighting the key local patterns that discern each fine-grained category from the others. Notably, the quality of the activation pattern can be quite close to that of existing fully-supervised FGVC methods [22, 23, 24, 25, 26].

V-H2. On Activated Instances

Fig. 7 shows the instances activated by the proposed CMD on both image-level and region-level crops, both of which are formulated as bags. These instances are located on the regions of the objects that help discriminate them from other fine-grained categories.

V-H3. On Failure Cases

Fig. 8 shows some failure cases of the proposed CMD, where the fine-grained patterns are not properly activated. In such cases, the object is captured under conditions such as low contrast, varied viewpoints and occlusion. Nevertheless, such cases are common challenges for learning a robust representation in most vision tasks, not just for self-supervised FGVC or the proposed CMD.

VI. Conclusion

Self-supervised FGVC is an emerging topic. In this paper, we propose a simple, straightforward but effective method, Cross-level Multi-instance Distillation (CMD). It uses the multiple instance formulation to model the relation between image patches and fine-grained semantics. Both intra-level and inter-level multi-instance distillation are proposed to learn the informative patches that relate to the fine-grained semantics. Extensive experiments on publicly available FGVC benchmarks show that it significantly outperforms the state-of-the-art SSL and self-supervised FGVC methods.

The proposed CMD has the potential for large-scale pre-training on huge amounts of unlabeled fine-grained images, but it also has some limitations. First, the proposed CMD is based on the assumption that the fine-grained patterns rest in several key instances. In extremely noisy or adverse conditions where the key instances are corrupted, the self-supervised representation learned by CMD can be degraded. Second, the relation between bag and instance is computed by a mean pooling based aggregation function, which does not consider the individual contribution of each instance.
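The mean-pooling aggregation named in this limitation can be sketched as follows; the toy instance embeddings are illustrative, not the paper's implementation:

```python
def mean_pool_bag(instances):
    """Bag embedding as the unweighted mean of its instance embeddings:
    every instance contributes equally, regardless of how informative it is,
    which is exactly the limitation discussed above."""
    dim = len(instances[0])
    return [sum(inst[d] for inst in instances) / len(instances) for d in range(dim)]

# Toy bag of three 2-D instance embeddings.
bag = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
pooled = mean_pool_bag(bag)
```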
3176
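For intuition, the mean-pooling aggregation mentioned above can be contrasted with the attention-based MIL pooling of Ilse et al. [16], which does weight the individual contribution of each instance. The sketch below is illustrative only; the dimensions, parameter names and random initialization are assumptions, not the paper's configuration.

```python
import numpy as np

def mean_pool_bag(instances):
    # Mean pooling: every instance contributes equally to the bag.
    return instances.mean(axis=0)

def attention_pool_bag(instances, V, w):
    # Attention-based MIL pooling in the spirit of Ilse et al. [16]:
    #   a_i = softmax_i(w^T tanh(V h_i)),  bag = sum_i a_i * h_i
    scores = np.tanh(instances @ V.T) @ w          # one scalar score per instance
    scores = scores - scores.max()                 # stable softmax over instances
    attn = np.exp(scores) / np.exp(scores).sum()
    return attn @ instances, attn

rng = np.random.default_rng(1)
instances = rng.normal(size=(6, 4))   # 6 patch features of dimension 4
V = rng.normal(size=(3, 4))           # attention hidden projection (learned in practice)
w = rng.normal(size=3)                # attention scoring vector (learned in practice)

bag_mean = mean_pool_bag(instances)
bag_attn, attn = attention_pool_bag(instances, V, w)
```

The attention weights `attn` sum to one but differ across instances, so a corrupted or uninformative patch can be down-weighted rather than averaged in with full weight.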
Our future work is three-fold. First, we will improve robustness in more challenging scenarios, where the key instances may be degraded by adverse conditions such as noise, low light and occlusion. Second, we will improve the learning scheme so as to weight the contribution of each instance in determining the fine-grained bag representation. Third, we will investigate the generalization ability to other semi- and weakly-supervised learning pipelines.
Acknowledgment
This work was supported by the National Natural Science Foundation of China under Contracts No. U22B2011 and No. 62325111. The authors would like to thank the editors and the anonymous reviewers, whose insightful suggestions and comments significantly improved our paper.
References

[1] J. Han, X. Yao, G. Cheng, X. Feng, and D. Xu, "P-cnn: Part-based convolutional neural networks for fine-grained visual categorization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 2, pp. 579–590, 2022.
[2] H. Wang, J. Liao, T. Cheng, Z. Gao, H. Liu, B. Ren, X. Bai, and W. Liu, "Knowledge mining with scene text for fine-grained recognition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4624–4633.
[3] W. Luo, X. Yang, X. Mo, Y. Lu, L. S. Davis, J. Li, J. Yang, and S.-N. Lim, "Cross-x learning for fine-grained visual categorization," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 8242–8251.
[4] W. Ge, X. Lin, and Y. Yu, "Weakly supervised complementary parts models for fine-grained image classification from the bottom up," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3034–3043.
[5] Y. Song, N. Sebe, and W. Wang, "On the eigenvalues of global covariance pooling for fine-grained visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3554–3566, 2023.
[6] X.-S. Wei, Y.-Z. Song, O. M. Aodha, J. Wu, Y. Peng, J. Tang, J. Yang, and S. Belongie, "Fine-grained image analysis with deep learning: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 8927–8948, 2022.
[7] Q. Bi, S. You, and T. Gevers, "Interactive learning of intrinsic and extrinsic properties for all-day semantic segmentation," IEEE Transactions on Image Processing, 2023.
[8] J. Pan, Q. Bi, Y. Yang, P. Zhu, and C. Bian, "Label-efficient hybrid-supervised learning for medical image segmentation," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 2, 2022, pp. 2026–2034.
[9] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in International Conference on Machine Learning, 2020, pp. 1597–1607.
[10] X. Chen and K. He, "Exploring simple siamese representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15750–15758.
[11] X. Chen, H. Fan, R. Girshick, and K. He, "Improved baselines with momentum contrastive learning," arXiv preprint arXiv:2003.04297, 2020.
[12] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin, "Emerging properties in self-supervised vision transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9650–9660.
[13] E. Cole, X. Yang, K. Wilber, O. Mac Aodha, and S. Belongie, "When does contrastive visual representation learning work?" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14755–14764.
[14] Y. Shu, A. van den Hengel, and L. Liu, "Learning common rationale to improve self-supervised representation for fine-grained visual recognition problems," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 11392–11401.
[15] X.-S. Wei, H.-J. Ye, X. Mu, J. Wu, C. Shen, and Z.-H. Zhou, "Multi-instance learning with emerging novel class," IEEE Transactions on Knowledge and Data Engineering, vol. 33, no. 5, pp. 2109–2120, 2019.
[16] M. Ilse, J. Tomczak, and M. Welling, "Attention-based deep multiple instance learning," in International Conference on Machine Learning, 2018, pp. 2127–2136.
[17] X. Wang, Y. Yan, P. Tang, X. Bai, and W. Liu, "Revisiting multiple instance neural networks," Pattern Recognition, vol. 74, pp. 15–24, 2018.
[18] X. Wang, B. Wang, X. Bai, W. Liu, and Z. Tu, "Max-margin multiple-instance dictionary learning," in International Conference on Machine Learning, 2013, pp. 846–854.
[19] X. Wang, Z. Zhu, C. Yao, and X. Bai, "Relaxed multiple-instance svm with application to object discovery," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1224–1232.
[20] P. Tang, X. Wang, X. Bai, and W. Liu, "Multiple instance detection network with online instance classifier refinement," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2843–2851.
[21] C. Li, J. Yang, P. Zhang, M. Gao, B. Xiao, X. Dai, L. Yuan, and J. Gao, "Efficient self-supervised vision transformers for representation learning," in International Conference on Learning Representations, 2022.
[22] L. Zhang, S. Huang, W. Liu, and D. Tao, "Learning a mixture of granularity-specific experts for fine-grained categorization," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 8331–8340.
[23] H. Touvron, A. Vedaldi, M. Douze, and H. Jégou, "Fixing the train-test resolution discrepancy," in Advances in Neural Information Processing Systems, vol. 32, 2019.
[24] R. Ji, L. Wen, L. Zhang, D. Du, Y. Wu, C. Zhao, X. Liu, and F. Huang, "Attention convolutional binary neural tree for fine-grained visual categorization," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10468–10477.
[25] R. Du, D. Chang, A. K. Bhunia, J. Xie, Z. Ma, Y.-Z. Song, and J. Guo, "Fine-grained visual classification via progressive multi-granularity training of jigsaw patches," in European Conference on Computer Vision, 2020, pp. 153–168.
[26] Y. Zhao, J. Li, X. Chen, and Y. Tian, "Part-guided relational transformers for fine-grained visual recognition," IEEE Transactions on Image Processing, vol. 30, pp. 9470–9481, 2021.
[27] A. Dubey, O. Gupta, P. Guo, R. Raskar, R. Farrell, and N. Naik, "Pairwise confusion for fine-grained visual classification," in European Conference on Computer Vision, 2018, pp. 70–86.
[28] D. Chang, Y. Tong, R. Du, T. Hospedales, Y.-Z. Song, and Z. Ma, "An erudite fine-grained visual classification model," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7268–7277.
[29] Y. Hu, X. Jin, Y. Zhang, H. Hong, J. Zhang, Y. He, and H. Xue, "Rams-trans: Recurrent attention multi-scale transformer for fine-grained image recognition," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 4239–4248.
[30] J. He, J.-N. Chen, S. Liu, A. Kortylewski, C. Yang, Y. Bai, C. Wang, and A. Yuille, "Transfg: A transformer architecture for fine-grained recognition," in Proceedings of the AAAI Conference on Artificial Intelligence, 2022, pp. 2026–2034.
[31] Y. Gao, X. Han, X. Wang, W. Huang, and M. Scott, "Channel interaction networks for fine-grained image categorization," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 10818–10825.
[32] P. Zhuang, Y. Wang, and Y. Qiao, "Learning attentive pairwise interaction for fine-grained classification," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 13130–13137.
[33] A. Behera, Z. Wharton, P. Hewage, and A. Bera, "Context-aware attentional pooling (cap) for fine-grained visual classification," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 929–937.
[34] Y. Rao, G. Chen, J. Lu, and J. Zhou, "Counterfactual attention learning for fine-grained visual categorization and re-identification," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1025–1034.
[35] H. Zhu, W. Ke, D. Li, J. Liu, L. Tian, and Y. Shan, "Dual cross-attention learning for fine-grained visual categorization and object re-identification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4692–4702.
[36] X.-S. Wei, Y. Shen, X. Sun, H.-J. Ye, and J. Yang, "A2-net: Learning attribute-aware hash codes for large-scale fine-grained image retrieval," Advances in Neural Information Processing Systems, vol. 34, pp. 5720–5730, 2021.
[37] Y. Shen, X. Sun, X.-S. Wei, Q.-Y. Jiang, and J. Yang, "Semicon: A learning-to-hash solution for large-scale fine-grained image retrieval," in European Conference on Computer Vision, 2022, pp. 531–548.
[38] L. Yang, X. Li, R. Song, B. Zhao, J. Tao, S. Zhou, J. Liang, and J. Yang, "Dynamic mlp for fine-grained image classification by leveraging geographical and temporal information," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10945–10954.
[39] X. Yang, Y. Wang, K. Chen, Y. Xu, and Y. Tian, "Fine-grained object classification via self-supervised pose alignment," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7399–7408.
[40] Z. Fang, X. Jiang, H. Tang, and Z. Li, "Learning contrastive self-distillation for ultra-fine-grained visual categorization targeting limited samples," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 8, pp. 7135–7148, 2024.
[41] R. Ji, J. Li, and L. Zhang, "Siamese self-supervised learning for fine-grained visual classification," Computer Vision and Image Understanding, vol. 229, p. 103658, 2023.
[42] S. Rastegar, H. Doughty, and C. Snoek, "Learn to categorize or categorize to learn? self-coding for generalized category discovery," Advances in Neural Information Processing Systems, vol. 36, 2024.
[43] J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. Richemond, E. Buchatskaya, C. Doersch, B. Avila Pires, Z. Guo, M. Gheshlaghi Azar et al., "Bootstrap your own latent - a new approach to self-supervised learning," Advances in Neural Information Processing Systems, vol. 33, pp. 21271–21284, 2020.
[44] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, "Barlow twins: Self-supervised learning via redundancy reduction," in International Conference on Machine Learning, 2021, pp. 12310–12320.
[45] A. Bardes, J. Ponce, and Y. Lecun, "Vicreg: Variance-invariance-covariance regularization for self-supervised learning," in International Conference on Learning Representations, 2022.
[46] I. Ben-Shaul, R. Shwartz-Ziv, T. Galanti, S. Dekel, and Y. LeCun, "Reverse engineering self-supervised learning," Advances in Neural Information Processing Systems, vol. 36, pp. 58324–58345, 2023.
[47] M. Assran, Q. Duval, I. Misra, P. Bojanowski, P. Vincent, M. Rabbat, Y. LeCun, and N. Ballas, "Self-supervised learning from images with a joint-embedding predictive architecture," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 15619–15629.
[48] Z. Feng and S. Zhang, "Evolved part masking for self-supervised learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10386–10395.
[49] S. Kim, S. Bae, and S.-Y. Yun, "Coreset sampling from open-set for fine-grained self-supervised learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7537–7547.
[50] F. Hu, C. Zhang, J. Guo, X.-S. Wei, L. Zhao, A. Xu, and L. Gao, "An asymmetric augmented self-supervised learning method for unsupervised fine-grained image hashing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 17648–17657.
[51] Y. Shu, B. Yu, H. Xu, and L. Liu, "Improving fine-grained visual recognition in low data regimes via self-boosting attention mechanism," in European Conference on Computer Vision, 2022, pp. 449–465.
[52] N. Zhao, Z. Wu, R. W. Lau, and S. Lin, "Distilling localization for self-supervised representation learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 12, 2021, pp. 10990–10998.
[53] D. Wu, S. Li, Z. Zang, K. Wang, L. Shang, B. Sun, H. Li, and S. Z. Li, "Align yourself: Self-supervised pre-training for fine-grained recognition via saliency alignment," arXiv preprint arXiv:2106.15788, 2021.
[54] L. Huang, S. You, M. Zheng, F. Wang, C. Qian, and T. Yamasaki, "Learning where to learn in cross-view self-supervised learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14451–14460.
[55] X. Peng, K. Wang, Z. Zhu, M. Wang, and Y. You, "Crafting better contrastive views for siamese representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16031–16040.
[56] Y. Yan, X. Wang, X. Guo, J. Fang, W. Liu, and J. Huang, "Deep multi-instance learning with dynamic pooling," in Asian Conference on Machine Learning, 2018, pp. 662–677.
[57] Q. Bi, K. Qin, H. Zhang, and G.-S. Xia, "Local semantic enhanced convnet for aerial scene recognition," IEEE Transactions on Image Processing, vol. 30, pp. 6498–6511, 2021.
[58] S. Yu, K. Ma, Q. Bi, C. Bian, M. Ning, N. He, Y. Li, H. Liu, and Y. Zheng, "Mil-vt: Multiple instance learning enhanced vision transformer for fundus image classification," in Medical Image Computing and Computer Assisted Intervention, 2021, pp. 45–54.
[59] X. Wang, Y. Yan, P. Tang, W. Liu, and X. Guo, "Bag similarity network for deep multi-instance learning," Information Sciences, vol. 504, pp. 578–588, 2019.
[60] R. Zhang, Q. Zhang, Y. Liu, H. Xin, Y. Liu, and X. Wang, "Multi-level multiple instance learning with transformer for whole slide image classification," arXiv preprint arXiv:2306.05029, 2023.
[61] Q. Bi, K. Qin, Z. Li, H. Zhang, K. Xu, and G.-S. Xia, "A multiple-instance densely-connected convnet for aerial scene classification," IEEE Transactions on Image Processing, vol. 29, pp. 4911–4926, 2020.
[62] Q. Bi, B. Zhou, K. Qin, Q. Ye, and G.-S. Xia, "All grains, one scheme (agos): Learning multigrain instance representation for aerial scene classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–17, 2022.
[63] P. Tang, X. Wang, Z. Huang, X. Bai, and W. Liu, "Deep patch learning for weakly supervised object classification and discovery," Pattern Recognition, vol. 71, pp. 446–459, 2017.
[64] T. Cheng, X. Wang, S. Chen, W. Zhang, Q. Zhang, C. Huang, Z. Zhang, and W. Liu, "Sparse instance activation for real-time instance segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4433–4442.
[65] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola, "Deep sets," Advances in Neural Information Processing Systems, vol. 30, 2017.
[66] P. Tang, X. Wang, B. Feng, and W. Liu, "Learning multi-instance deep discriminative patterns for image classification," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3385–3396, 2016.
[67] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012–10022.
[68] P. Hall, "On kullback-leibler loss and density estimation," The Annals of Statistics, pp. 1491–1519, 1987.
[69] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked autoencoders are scalable vision learners," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16000–16009.
[70] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[71] H. Wu, B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang, "Cvt: Introducing convolutions to vision transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 22–31.
[72] P. Zhang, X. Dai, J. Yang, B. Xiao, L. Yuan, L. Zhang, and J. Gao, "Multi-scale vision longformer: A new vision transformer for high-resolution image encoding," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 2998–3008.
[73] E. Fini, P. Astolfi, K. Alahari, X. Alameda-Pineda, J. Mairal, M. Nabi, and E. Ricci, "Semi-supervised learning made simple with self-supervised clustering," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 3187–3197.
[74] H. Sun, X. He, J. Xu, and Y. Peng, "Sim-ofe: Structure information mining and object-aware feature enhancement for fine-grained visual categorization," IEEE Transactions on Image Processing, vol. 33, pp. 5312–5326, 2024.
[75] Y.-C. Hsu, C.-Y. Hong, M.-S. Lee, D. Geiger, and T.-L. Liu, "Abc-norm regularization for fine-grained and long-tailed image classification," IEEE Transactions on Image Processing, vol. 32, pp. 3885–3896, 2023.
[76] Y. Song, N. Sebe, and W. Wang, "Fast differentiable matrix square root and inverse square root," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 6, pp. 7367–7380, 2023.
[77] Z. Tang, H. Yang, and C. Y.-C. Chen, "Weakly supervised posture mining for fine-grained classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 23735–23744.
[78] L. Zhu, T. Chen, J. Yin, S. See, and J. Liu, "Learning gabor texture features for fine-grained recognition," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 1621–1631.
[79] X. Jiang, H. Tang, J. Gao, X. Du, S. He, and Z. Li, "Delving into multimodal prompting for fine-grained visual classification," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 3, 2024, pp. 2570–2578.