Title: Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition

URL Source: https://arxiv.org/html/2203.11593

Published Time: Wed, 01 May 2024 13:59:59 GMT

Markdown Content:

Junuk Jung, Seonhoon Lee, Heung-Seon Oh (corresponding author), Yongjun Park, Joochan Park, Sungbin Son

School of Computer Science and Engineering

Korea University of Technology and Education (KOREATECH)

{rnans33, karma1002, ohhs, qkr2938, green669, sbson0621}@koreatech.ac.kr

###### Abstract

The goal of face recognition (FR) can be viewed as a pair similarity optimization problem: maximizing a similarity set $\mathcal{S}^{p}$ over positive pairs while minimizing a similarity set $\mathcal{S}^{n}$ over negative pairs. Ideally, FR models are expected to form a well-discriminative feature space (WDFS) that satisfies $\inf\mathcal{S}^{p} > \sup\mathcal{S}^{n}$. With regard to WDFS, the existing deep feature learning paradigms (i.e., metric and classification losses) can be expressed as a unified perspective on different pair generation (PG) strategies. Unfortunately, in the metric loss (ML), it is infeasible to generate negative pairs covering all classes in each iteration because of the limited mini-batch size. In contrast, in the classification loss (CL), it is difficult to generate extremely hard negative pairs because the class weight vectors converge to their centers. This leads to a mismatch between the similarity distributions of the sampled pairs and of all negative pairs. Thus, this paper proposes unified negative pair generation (UNPG), which combines the two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. UNPG introduces useful information about negative pairs via MLPG to overcome the deficiency of CLPG. Moreover, it filters the similarities of noisy negative pairs to guarantee reliable convergence and improved performance. Exhaustive experiments show the superiority of UNPG, which achieves state-of-the-art performance across recent loss functions on public benchmark datasets. Our code and pretrained models are publicly available at https://github.com/Jung-Jun-Uk/UNPG.git.
![Image 1](https://arxiv.org/html/2203.11593v2/extracted/2203.11593v2/images/geo_intro2.png)

Figure 1: Similarity distributions viewed from the pair generation perspective for approximating WDFS. The bottom row shows the similarity distributions in the feature space after sufficient learning with the setups in the top row. $\mathcal{S}^{p}$ and $\mathcal{S}^{n}$ are the positive and negative similarity sets, and $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ are subsets of $\mathcal{S}^{p}$ and $\mathcal{S}^{n}$, respectively. (a) The ideal similarity sets satisfying $\inf\mathcal{S}^{p} > \sup\mathcal{S}^{n}$ after learning with $\mathcal{S}^{p}$ and $\mathcal{S}^{n}$; $\theta^{p}_{max}$ and $\theta^{n}_{min}$ are the maximum and minimum angles among positive and negative pairs. (b) With a vanilla loss, no overlap between $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ results in an overlap between $\mathcal{S}^{p}$ and $\mathcal{S}^{n}$. (c) With a marginal loss, creating an overlap between $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ by shifting $\hat{\mathcal{S}}^{p}$ reduces the overlap after learning. (d) With more negative pairs, creating an overlap between $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ by shifting $\hat{\mathcal{S}}^{p}$ and enlarging $\hat{\mathcal{S}}^{n}$ significantly reduces the overlap after learning.
1 Introduction
--------------

The goal of face recognition (FR) can be viewed as a pair similarity optimization problem that maximizes a similarity set $\mathcal{S}^{p}$ over positive pairs while minimizing a similarity set $\mathcal{S}^{n}$ over negative pairs. Regardless of the FR task, such as face verification (1:1) or face identification (1:N), FR models are expected to form a well-discriminative feature space (WDFS) that satisfies $\inf\mathcal{S}^{p} > \sup\mathcal{S}^{n}$, as shown in Fig. 1 (a). To this end, previous research advances pair similarity optimization [27, 35, 39, 14, 4, 34, 18] by enhancing intra-class compactness and inter-class dispersion.
In deep feature learning paradigms for pair similarity optimization, loss functions in FR can be categorized into two approaches: metric loss (ML; e.g., triplet loss [23, 8] and N-pair loss [26]) and classification loss (CL; e.g., softmax loss [1, 21, 30]). The former performs the optimization directly with a pair of deep feature vectors using a pair-wise label, whereas the latter performs it indirectly with a pair of a deep feature vector and a class weight vector using a class-level label. Recently, circle loss [27] expressed the two approaches as a unified loss function, since their intent and behavior pursue the same objective: maximizing a similarity set $\mathcal{S}^{p}$ over positive pairs while minimizing one over negative pairs. We decompose the unified loss function into pair generation (PG) and similarity computation (SC) without loss of generality. SC focuses on computing the similarity between the two samples in a pair, whereas PG focuses on generating a pair from deep feature or class weight vectors. Within the unified loss function, the only difference between ML and CL is PG, because the various SC methods can be applied to both ML and CL in the same manner. Consequently, the core of FR research from a unified perspective is the generation of informative pairs, i.e., PG. This is crucial because only a limited number of pairs are trainable in each iteration owing to the large computational cost. Based on the assumption that the pairs sampled from mini-batches represent the entire feature space, existing methods decrease the loss of a pair as it approaches WDFS and increase it in the opposite case.
We observed why sufficiently trained FR models fail to approach WDFS. The failure stems from the mismatch between the similarity distributions of the sampled pairs and of all pairs. Fig. 1 (b) shows an example of two similarity sets $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ of positive and negative pairs, respectively, sampled from mini-batches. An FR model is rarely trainable with $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ because they are well separated with almost no overlap. To deal with this problem, a line of research [27, 35, 39, 14, 4, 34, 18] devises marginal loss functions that reduce the gap by shifting $\hat{\mathcal{S}}^{p}$, as shown in Fig. 1 (c). In general, marginal CL functions outperform ML functions on large-scale datasets [5]. However, a mismatch between the sampled pairs and all pairs remains because it is difficult to obtain extremely hard negative pairs.
This paper proposes unified negative pair generation (UNPG), which combines two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. Moreover, it filters noisy negative pairs, such as too-easy and too-hard negative pairs, to guarantee reliable convergence and improve performance. Consequently, UNPG helps approach WDFS, as shown in Fig. 1 (d). Through experiments, we demonstrate the superiority of UNPG by achieving state-of-the-art performance using recent loss functions equipped with UNPG on public benchmark datasets (IJB-B [37], IJB-C [17], MegaFace [12], LFW [10], CFP-FP [24], AgeDB-30 [19], CALFW [41], and K-FACE [2]) and deliver an in-depth analysis of the reasons behind its effectiveness. The contributions of this study are summarized as follows:
* We propose unified negative pair generation (UNPG), which alleviates the mismatch between the similarity distributions of the sampled pairs and the entire set of negative pairs. UNPG is simple and adapts to existing CL functions with only a minor modification.
* We demonstrate the superiority of UNPG by achieving state-of-the-art performance using recent loss functions equipped with UNPG on public benchmark datasets for face verification and identification.
2 Related Works
---------------
FR is one of the most promising computer-vision tasks. Its rapid recent growth is due to the combination of three factors: 1) the introduction of large-scale face datasets [5, 38, 28], 2) the development of effective backbone models [13, 7, 25, 9, 29], and 3) novel loss functions [34, 4, 18]. Among these, loss functions have been the most actively developed and can be categorized into metric and classification losses.
Metric Loss. Early direct optimization methods include contrastive loss [3, 6] and triplet loss [23, 8], which use the similarity between pairs or triplets in the feature space. They pull positive samples close and push negative samples far away, but often suffer from slow convergence and poor local optima because they only learn 1:1 relationships within positive and negative pairs. Lifted-structure loss [20] and N-pair loss [26] were designed to address this issue: they build many negative samples and one positive sample around the same anchor point and learn their relationships simultaneously. Subsequently, other methods were developed to create more informative pairs. Multi-similarity loss [35] classifies existing studies by three types of similarity and devises pair mining and pair weighting methods that satisfy them simultaneously. Tuplet-margin loss [39] provides a slack margin to prevent overfitting to hard triplets. Despite these efforts, ML still faces a problem: the negative pairs generated in each iteration cannot represent all identities because FR datasets [5, 38, 28] usually have far more classes than a mini-batch can hold.
Classification Loss. Early indirect optimization methods include softmax loss [1, 21, 30], which uses the similarity between deep feature and class weight vectors. Softmax loss has been widely applied to classification problems, but it is not appropriate for FR because testing is done by similarity comparison, not classification. Hence, two modifications of the softmax logit have been investigated to form a feature space suitable for FR. The first is normalization of the deep feature and class weight vectors [22, 34, 32] to reduce the gap between the training and test phases by mapping both to the cosine similarity space; this is motivated by the feature-space interpretations of center loss [36], L-softmax [15], and NormFace [33]. The second is the margin assignment technique [14, 34, 32, 4, 18], performed in various ways to enhance intra-class compactness and inter-class dispersion. CosFace [34] and ArcFace [4], typical margin-based loss functions, add external and internal margins to the cosine angle, respectively. Recently, MagFace [18] introduced a new margin and regularizer under several constraints that assume a positive correlation between feature magnitude and face quality and that ensure convergence, improving FR performance by creating more discriminative features. However, in our interpretation, extremely hard negative similarities in the feature space cannot be expressed by the indirect optimization method alone.
Multi-Objective Loss & Unified Loss. Multi-objective losses combine two different losses with a mixture weight at the surface level. MixFace [11] combined the metric and classification losses (i.e., $\mathcal{L}_{mix}=\mathcal{L}_{arc}+\mathcal{L}_{sn\text{-}pair}$) with an analysis of their advantages and disadvantages. However, it is a surface-level mixture, not a unified loss function. According to circle loss [27], the two existing approaches (i.e., metric and classification losses) can be expressed as a unified loss function. Circle loss also adds independent weight factors to deal with ambiguous convergence, but it is limited in generating pairs (it uses only MLPG).
3 Methodology
-------------
Unified Loss. According to [27, 26], the classification and metric losses can be expressed as a unified loss function (i.e., cross-entropy loss). Suppose $\hat{\mathcal{S}}^{p}=\{s^{p}_{i} \mid i=1,2,\dots,K\}$ and $\hat{\mathcal{S}}^{n}=\{s^{n}_{j} \mid j=1,2,\dots,L\}$ are the similarity scores of $K$ positive and $L$ negative pairs, respectively. Then, the unified loss function is defined as:
$$
\mathcal{L}^{uni}=\frac{1}{K}\sum_{i=1}^{K}\mathcal{L}^{uni}_{i},\qquad
\mathcal{L}^{uni}_{i}=-\log\frac{e^{\gamma s^{p}_{i}}}{e^{\gamma s^{p}_{i}}+\sum_{j=1}^{L}e^{\gamma s^{n}_{j}}}
=\log\Big[1+\sum_{j=1}^{L}e^{\gamma(s^{n}_{j}-s^{p}_{i})}\Big]
\tag{1}
$$
where $\gamma$ is a positive scale factor.
The only difference between the two losses is the method of computing $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$. We break this step down into PG and SC to clearly explain our proposed method without loss of generality.
Pair Generation (PG). In a feature space, suppose $\bm{x}_{i}$ and $\bm{x}_{j}$ are the $i$-th and $j$-th samples in an $N$-sized mini-batch, $y_{i}$ and $y_{j}$ are their target classes, and $\bm{w}_{y_{j}}$ is the class weight vector of class $y_{j}$. Then, we generate positive and negative pair sets $\mathcal{P}$ and $\mathcal{N}$ for the metric (Eq. 2) and classification (Eq. 3) losses, respectively:
$$
\begin{aligned}
\mathcal{P}^{ml}&=\{(\bm{x}_{i},\bm{x}_{j}) \mid y_{i}=y_{j}\}\\
\mathcal{N}^{ml}&=\{(\bm{x}_{i},\bm{x}_{j}) \mid y_{i}\neq y_{j}\}
\end{aligned}
\tag{2}
$$
$$
\begin{aligned}
\mathcal{P}^{cl}_{i}&=(\bm{x}_{i},\bm{w}_{y_{i}})\\
\mathcal{N}^{cl}_{i}&=\{(\bm{x}_{i},\bm{w}_{j}) \mid j\neq y_{i}\}
\end{aligned}
\tag{3}
$$
In ML, a pair is composed of two samples (e.g., $\bm{x}_{i}$ and $\bm{x}_{j}$) from a mini-batch, whereas in CL it is composed of a sample and a class weight vector (e.g., $\bm{x}_{i}$ and $\bm{w}_{y_{i}}$). We denote the PG of the metric and classification losses as MLPG and CLPG, respectively.
Similarity Computation (SC). We compute the similarity sets $\mathcal{\hat{S}}^{p}$ and $\mathcal{\hat{S}}^{n}$ from the pair sets obtained by PG. The metric and classification losses employ the same similarity method for the same type of pair sets (i.e., the positive sets $\mathcal{P}^{ml}$ and $\mathcal{P}^{cl}$, and the negative sets $\mathcal{N}^{ml}$ and $\mathcal{N}^{cl}$). Recent research has focused on improving the cosine similarity using a margin. Let us define the angle between two vectors as $\Theta(\bm{a},\bm{b})=\arccos(\bm{a}^{T}\bm{b}/\lVert\bm{a}\rVert\lVert\bm{b}\rVert)$. Then, SN-pair[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)] computes $\mathcal{\hat{S}}^{p}$ and $\mathcal{\hat{S}}^{n}$ for $\mathcal{P}^{ml}$ and $\mathcal{N}^{ml}$ as:
$$\begin{aligned}\mathcal{\hat{S}}^{p}&=\{\cos\Theta(\bm{x}_{i},\bm{x}_{j})\mid y_{i}=y_{j}\}\\ \mathcal{\hat{S}}^{n}&=\{\cos\Theta(\bm{x}_{i},\bm{x}_{j})\mid y_{i}\neq y_{j}\}\end{aligned}\tag{4}$$
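For concreteness, the mini-batch similarity sets of Eq. 4 can be computed in a few lines. The following NumPy sketch (function and variable names are ours, not from the paper) builds both sets from a batch of embeddings:

```python
import numpy as np

def snpair_similarity_sets(x, y):
    """All-pair cosine similarities of a mini-batch, split into the
    positive set (same identity, i != j) and the negative set (Eq. 4).
    x: (B, d) embeddings, y: (B,) integer identity labels."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)  # unit norm, so dot product = cos(Theta)
    cos = x @ x.T                                     # (B, B) pairwise cosine matrix
    same = y[:, None] == y[None, :]                   # True where labels match
    off_diag = ~np.eye(len(y), dtype=bool)            # exclude self-pairs (i, i)
    return cos[same & off_diag], cos[~same]           # (S^p, S^n)
```

With a batch of $B$ samples, this yields every positive and negative pair available to MLPG in that iteration.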
There is a line[[4](https://arxiv.org/html/2203.11593v2#bib.bib4), [18](https://arxiv.org/html/2203.11593v2#bib.bib18), [34](https://arxiv.org/html/2203.11593v2#bib.bib34)] of research that employs a margin in the cosine similarity. In CosFace[[34](https://arxiv.org/html/2203.11593v2#bib.bib34)], the margin $m$ is placed outside the cosine for $\mathcal{\hat{S}}^{p}$. Thus, $\mathcal{\hat{S}}^{p}$ and $\mathcal{\hat{S}}^{n}$ are computed for $\mathcal{P}^{cl}_{i}$ and $\mathcal{N}^{cl}_{i}$ as:
$$\begin{aligned}\mathcal{\hat{S}}^{p}_{i}&=\{\cos\Theta(\bm{x}_{i},\bm{w}_{y_{i}})-m\}\\ \mathcal{\hat{S}}^{n}_{i}&=\{\cos\Theta(\bm{x}_{i},\bm{w}_{j})\mid j\neq y_{i}\}\end{aligned}\tag{5}$$
On the other hand, ArcFace[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)] places the margin $m$ inside the cosine:
$$\mathcal{\hat{S}}^{p}_{i}=\{\cos(\Theta(\bm{x}_{i},\bm{w}_{y_{i}})+m)\}\tag{6}$$
In other margin-based methods such as SphereFace[[14](https://arxiv.org/html/2203.11593v2#bib.bib14)] and MagFace[[18](https://arxiv.org/html/2203.11593v2#bib.bib18)], $\mathcal{\hat{S}}^{p}$ and $\mathcal{\hat{S}}^{n}$ can be derived similarly without loss of generality.
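To make the two margin placements concrete, the following sketch contrasts the positive-pair scores of the CosFace-style and ArcFace-style margins (function names are ours; we use the usual convention in which the margin penalizes the positive score):

```python
import numpy as np

def cosface_positive(theta, m=0.5):
    # Margin applied outside the cosine (additive cosine margin).
    return np.cos(theta) - m

def arcface_positive(theta, m=0.5):
    # Margin applied inside the cosine (additive angular margin).
    return np.cos(theta + m)
```

Both reduce the positive similarity relative to plain $\cos\theta$, so the model must pull $\bm{x}_{i}$ closer to $\bm{w}_{y_{i}}$ to achieve the same loss; ArcFace's penalty varies with $\theta$, whereas CosFace's is constant.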

Figure 2: Unified loss with UNPG.
Unified Negative Pair Generation (UNPG). We address the fact that PG is the only difference between the metric and classification losses from a unified perspective. Previous studies[[35](https://arxiv.org/html/2203.11593v2#bib.bib35), [39](https://arxiv.org/html/2203.11593v2#bib.bib39), [14](https://arxiv.org/html/2203.11593v2#bib.bib14), [4](https://arxiv.org/html/2203.11593v2#bib.bib4), [34](https://arxiv.org/html/2203.11593v2#bib.bib34), [18](https://arxiv.org/html/2203.11593v2#bib.bib18)] attempted to reduce the gap between $\mathcal{S}^{p}$ and $\mathcal{\hat{S}}^{p}$ by devising various margin-based methods. In contrast, little attention has been paid to the gap between $\mathcal{S}^{n}$ and $\mathcal{\hat{S}}^{n}$, even though it is a critical component in computing a loss. Several reasons cause this gap. In ML, it is infeasible to generate negative pairs covering all classes in each iteration because of the limited mini-batch size. In CL, it is difficult to generate hard negative pairs because the class weight vectors converge to their class centers. 
This paper proposes unified negative pair generation (UNPG), which combines the MLPG and CLPG strategies from a unified perspective to jointly alleviate the mismatches of $(\mathcal{S}^{n},\mathcal{\hat{S}}^{n})$ and $(\mathcal{S}^{p},\mathcal{\hat{S}}^{p})$. UNPG introduces useful information about negative pairs using MLPG to overcome the deficiency of CLPG. In UNPG, the negative pair set $\mathcal{N}^{uni}_{i}$ and the corresponding similarity set $\mathcal{\hat{S}}^{n}_{i}$ are defined as:
$$\begin{gathered}\mathcal{N}^{uni}_{i}=\mathcal{N}^{cl}_{i}\cup\mathcal{N}^{ml}\\ \mathcal{\hat{S}}^{n}_{i}=\{\cos\Theta(\bm{x}_{i},\bm{w}_{j})\mid j\neq y_{i}\}\cup\{\cos\Theta(\bm{x}_{i},\bm{x}_{j})\mid y_{i}\neq y_{j}\}\end{gathered}\tag{7}$$
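The union in Eq. 7 can be sketched for a single anchor as follows (a NumPy illustration; helper and argument names are ours): classifier negatives come from every non-target class weight, and metric negatives from every different-identity sample in the mini-batch.

```python
import numpy as np

def _unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def unpg_negative_similarities(x_i, y_i, W, batch_x, batch_y):
    """Unified negative similarity set of Eq. 7 for one anchor x_i:
    cosines to all non-target class weights (CLPG) union cosines to
    all different-identity samples in the mini-batch (MLPG)."""
    x_i = _unit(x_i)
    s_cl = _unit(W) @ x_i            # vs. every class weight vector
    s_cl = np.delete(s_cl, y_i)      # drop the target class y_i
    s_ml = _unit(batch_x) @ x_i      # vs. every sample in the batch
    s_ml = s_ml[batch_y != y_i]      # keep different identities only
    return np.concatenate([s_cl, s_ml])
```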
$\mathcal{\hat{S}}^{n}_{i}$ can be computed from $\mathcal{N}^{uni}_{i}$ using various methods such as Eqs. [4](https://arxiv.org/html/2203.11593v2#S3.E4) and [5](https://arxiv.org/html/2203.11593v2#S3.E5). Decomposing SC and PG can lead to wide research directions in FR. As a result, the unified loss with UNPG is defined as:
$$\mathcal{L}^{uni}_{i}=-\log P_{i}=-\log\frac{e^{\gamma s^{p}_{i}}}{e^{\gamma s^{p}_{i}}+\sum_{j=1}^{L^{cl}}e^{\gamma s^{n}_{j}}+\sum_{k=1}^{L^{ml}}e^{\gamma s^{n}_{k}}}\tag{8}$$
where $L^{cl}=|\mathcal{N}^{cl}_{i}|$ and $L^{ml}=|\mathcal{N}^{ml}|$. Note that the normalization term in Eq. [8](https://arxiv.org/html/2203.11593v2#S3.E8) uses the scores from $\mathcal{N}^{ml}$. Fig. [2](https://arxiv.org/html/2203.11593v2#S3.F2) visualizes Eq. [8](https://arxiv.org/html/2203.11593v2#S3.E8): UNPG computes the similarity score matrix obtained from $\mathcal{N}^{ml}$ at each mini-batch and then duplicates it by the size of the mini-batch.

Figure 3: Geometrical interpretation of the feature space associated with the similarity space. (a) Ideally, the loss function imposes a large loss on a feature space with low discriminability. A shaded area of the same color represents the target region of the same class. $\theta^{p}_{max}$ and $\theta^{n}_{min}$ are the respective angles of the max positive and min negative pairs in the feature space. $\mathcal{S}^{p}$ and $\mathcal{S}^{n}$ represent similarity sets. (b) Despite being equally low discriminative, a very small loss is given by a vanilla loss (e.g., norm-softmax). $\bm{w}_{1}$ and $\bm{w}_{2}$ are the normalized weight vectors of classes 1 and 2, while $\bm{x}_{1}$ and $\bm{x}_{2}$ are the normalized feature vectors. 
$\hat{\theta}^{p}_{max}$ and $\hat{\theta}^{n}_{min}$ represent the angles of the max positive and min negative pairs in $\hat{\mathcal{S}}^{p}\subset\mathcal{S}^{p}$ and $\hat{\mathcal{S}}^{n}\subset\mathcal{S}^{n}$, respectively. (c) The mismatch between $\mathcal{S}^{p}$ and $\mathcal{\hat{S}}^{p}$ is reduced by using a marginal classification loss (e.g., ArcFace). However, a small loss is still given because of the mismatch between $\mathcal{S}^{n}$ and $\mathcal{\hat{S}}^{n}$. (d) A marginal classification loss with UNPG behaves closest to the ideal by alleviating the mismatch between $\mathcal{S}^{n}$ and $\mathcal{\hat{S}}^{n}$.
Algorithm 1 Noise Negative Pair Filtering.
**Input:** negative-pair similarities $s^{n}_{j}\in\mathcal{\hat{S}}^{n}$ from $\mathcal{N}^{ml}$; whisker size $r$

1. Extract the lower 25% similarity $s^{n}_{l}$
2. Extract the upper 25% similarity $s^{n}_{u}$
3. $\mathrm{IQR}=s^{n}_{u}-s^{n}_{l}$
4. $\mathrm{Min}=s^{n}_{l}-r\cdot\mathrm{IQR}$; $\mathrm{Max}=s^{n}_{u}+r\cdot\mathrm{IQR}$
5. $\mathcal{\tilde{N}}^{ml}=\{(\bm{x}_{i},\bm{x}_{j})\mid y_{i}\neq y_{j}\wedge\mathrm{Min}\leq s^{n}_{j}\leq\mathrm{Max}\}$

**Output:** $\mathcal{\tilde{N}}^{ml}$
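The filtering rule in Algorithm 1 is the standard interquartile-range (IQR) outlier criterion applied to the negative-pair similarity scores. A NumPy sketch (function name is ours):

```python
import numpy as np

def whisker_filter(s_neg, r=1.0):
    """Box-and-whisker filtering of noise negative pairs (Algorithm 1):
    keep only similarities inside [Q1 - r*IQR, Q3 + r*IQR], discarding
    too-easy and too-hard negative pairs."""
    s_l, s_u = np.percentile(s_neg, [25, 75])   # lower/upper quartiles
    iqr = s_u - s_l
    keep = (s_neg >= s_l - r * iqr) & (s_neg <= s_u + r * iqr)
    return s_neg[keep]
```

In practice, the kept indices would be used to select the surviving pairs $(\bm{x}_{i},\bm{x}_{j})$ rather than the scores alone.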
Noise Negative Pair Filtering. According to our preliminary experiments, directly utilizing the $\mathcal{N}^{ml}$ produced by MLPG causes performance degradation and divergence of the loss, because many noise negative pairs cause a side effect. We assume that there are two types of noise pairs: too-easy and too-hard pairs. In the former case, FR models need not pay attention to the pairs but do, owing to the large number of such pairs. In the latter case, FR models cannot accommodate them because they exceed the representation power of the models. To address this problem, we developed noise negative pair filtering using the box-and-whisker algorithm[[31](https://arxiv.org/html/2203.11593v2#bib.bib31)] described in Algorithm 1.
As a result, UNPG adopting Algorithm 1 is defined as:
$$\mathcal{N}^{uni}_{i}=\mathcal{N}^{cl}_{i}\cup\mathcal{\tilde{N}}^{ml}\tag{9}$$
Geometrical Interpretation of Feature Space. We interpret the role of UNPG in the feature space, as shown in Fig. [3](https://arxiv.org/html/2203.11593v2#S3.F3). To form a WDFS satisfying $\inf\mathcal{S}^{p}>\sup\mathcal{S}^{n}$, a loss function should assign a large loss to a feature space with low discriminability and a small loss to one with high discriminability. The unified loss function in Eq. [1](https://arxiv.org/html/2203.11593v2#S3.E1) follows this intent but fails to form a WDFS because of the mismatch between the similarity sets of the sampled pairs and those of all pairs. Fig. [3](https://arxiv.org/html/2203.11593v2#S3.F3) (a) depicts the ideal behavior of a loss function, which assigns a large loss to a feature space with low discriminability until $\inf\mathcal{S}^{p}\gg\sup\mathcal{S}^{n}$. In contrast, in Fig. [3](https://arxiv.org/html/2203.11593v2#S3.F3) (b), a small loss is assigned to a feature space with low discriminability because the sampled $\mathcal{\hat{S}}^{p}$ and $\mathcal{\hat{S}}^{n}$ are well separated. This problem is alleviated by using a similarity score with a margin, as shown in Fig. [3](https://arxiv.org/html/2203.11593v2#S3.F3) (c): the margin makes $\mathcal{\hat{S}}^{p}$ informative and worth training on, as the interval of $\mathcal{\hat{S}}^{p}$ shifts to the left. Fig. [3](https://arxiv.org/html/2203.11593v2#S3.F3) (d) shows the effect of UNPG. The interval of $\mathcal{\hat{S}}^{n}$ becomes wider as more negative pairs are included in it. 
Consequently, a large loss is assigned until $\inf\mathcal{\hat{S}}^{p}>\sup\mathcal{\hat{S}}^{n}$ is satisfied.
4 Experiments
-------------
Table 1: A brief overview of FR datasets. (P) and (G) refer to the probe set and the gallery set of MegaFace, respectively.
| Train | # Identities | # Images |
| --- | --- | --- |
| MS1M-V2[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)] | 85K | 5.8M |
| K-FACE:T4[[2](https://arxiv.org/html/2203.11593v2#bib.bib2)] | 370 | 3.8M |

| Test | # Identities | # Images |
| --- | --- | --- |
| IJB-B[[37](https://arxiv.org/html/2203.11593v2#bib.bib37)] | 1,845 | 76.8K |
| IJB-C[[17](https://arxiv.org/html/2203.11593v2#bib.bib17)] | 3,531 | 148.8K |
| MegaFace (P)[[12](https://arxiv.org/html/2203.11593v2#bib.bib12)] | 530 | 100K |
| MegaFace (G)[[12](https://arxiv.org/html/2203.11593v2#bib.bib12)] | 690K | 1M |
| LFW[[10](https://arxiv.org/html/2203.11593v2#bib.bib10)] | 5,749 | 13,233 |
| CFPFP[[24](https://arxiv.org/html/2203.11593v2#bib.bib24)] | 500 | 7,000 |
| AgeDB-30[[19](https://arxiv.org/html/2203.11593v2#bib.bib19)] | 568 | 16,488 |
| CALFW[[41](https://arxiv.org/html/2203.11593v2#bib.bib41)] | 5,749 | 12,174 |

| Test[[2](https://arxiv.org/html/2203.11593v2#bib.bib2)] | # Pairs | Variance |
| --- | --- | --- |
| K-FACE:Q1 | 1,000 | Very Low |
| K-FACE:Q2 | 10K | Low |
| K-FACE:Q3 | 10K | Middle |
| K-FACE:Q4 | 10K | High |
Table 2: Verification accuracy of TAR@FAR on IJB-B and IJB-C. "*" indicates results from the original paper.
| Method | Backbone | IJB-B 1e-6 | IJB-B 1e-5 | IJB-B 1e-4 | IJB-B 1e-3 | IJB-B 1e-2 | IJB-C 1e-6 | IJB-C 1e-5 | IJB-C 1e-4 | IJB-C 1e-3 | IJB-C 1e-2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VGGFace2*[[1](https://arxiv.org/html/2203.11593v2#bib.bib1)] | R50 | - | 67.10 | 80.00 | - | - | - | 74.70 | 84.00 | - | - |
| Circle-loss*[[27](https://arxiv.org/html/2203.11593v2#bib.bib27)] | R34 | - | - | - | - | - | - | 86.78 | 93.44 | 96.04 | - |
| Circle-loss*[[27](https://arxiv.org/html/2203.11593v2#bib.bib27)] | R100 | - | - | - | - | - | - | 89.60 | 93.95 | 96.29 | - |
| ArcFace*[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)] | R100 | - | - | 94.20 | - | - | - | - | 95.60 | - | - |
| MagFace*[[18](https://arxiv.org/html/2203.11593v2#bib.bib18)] | R100 | 42.32 | 90.36 | 94.51 | - | - | 90.24 | 94.08 | 95.97 | - | - |
| Triplet-loss | R34 | 4.42 | 12.57 | 32.65 | 61.33 | 88.78 | 4.04 | 15.32 | 36.86 | 66.46 | 90.77 |
| Contrastive-loss | R34 | 33.10 | 59.40 | 72.18 | 81.98 | 90.11 | 57.84 | 66.41 | 76.16 | 85.03 | 92.21 |
| CosFace[[34](https://arxiv.org/html/2203.11593v2#bib.bib34)] | R34 | 39.70 | 87.47 | 93.55 | 95.71 | 97.05 | 85.95 | 92.57 | 95.23 | 96.81 | 97.94 |
| Cos+UNPG | R34 | 43.33 | 87.51 | 93.58 | 95.96 | 97.24 | 87.84 | 92.49 | 95.33 | 96.94 | 98.06 |
| ArcFace | R34 | 40.61 | 86.28 | 93.38 | 95.74 | 97.22 | 85.47 | 92.21 | 95.08 | 96.79 | 97.94 |
| Arc+Triplet | R34 | 38.31 | 86.46 | 93.22 | 95.72 | 97.28 | 86.40 | 92.19 | 94.97 | 96.68 | 97.94 |
| Arc+Contrastive | R34 | 38.07 | 86.54 | 93.03 | 95.61 | 97.33 | 85.21 | 92.54 | 94.86 | 96.60 | 98.01 |
| Arc+UNPG | R34 | 40.27 | 88.05 | 93.66 | 95.96 | 97.17 | 87.99 | 93.02 | 95.33 | 96.88 | 97.92 |
| CosFace | R100 | 42.27 | 89.38 | 94.39 | 96.17 | 97.35 | 86.56 | 94.42 | 96.35 | 97.57 | 98.26 |
| Cos+UNPG | R100 | 49.13 | 90.61 | 94.99 | 96.50 | 97.36 | 86.95 | 94.48 | 96.39 | 97.57 | 98.24 |
| ArcFace | R100 | 40.68 | 89.99 | 94.89 | 96.40 | 97.59 | 86.57 | 93.93 | 96.25 | 97.43 | 98.31 |
| Arc+UNPG | R100 | 44.03 | 90.57 | 95.04 | 96.60 | 97.70 | 88.06 | 94.47 | 96.33 | 97.53 | 98.39 |
| MagFace | R100 | 43.71 | 89.03 | 93.99 | 96.11 | 97.32 | 87.19 | 93.30 | 95.54 | 97.00 | 98.05 |
| Mag+UNPG | R100 | 46.33 | 90.93 | 95.21 | 96.50 | 97.63 | 90.01 | 94.70 | 96.38 | 97.51 | 98.32 |
### 4.1 Implementation Details
Datasets. For training, MS1M-V2[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)] and K-FACE:T4[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)] datasets were employed. MS1M-V2, a semi-automatically refined version of MS-Celeb-1M[[5](https://arxiv.org/html/2203.11593v2#bib.bib5)], has 5.8M images and 85K identities. K-FACE:T4 is a preprocessed version of K-FACE[[2](https://arxiv.org/html/2203.11593v2#bib.bib2)] utilized in MixFace[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)] and has 3.8M images and 370 identities. For testing, several benchmark datasets (IJB-B[[37](https://arxiv.org/html/2203.11593v2#bib.bib37)], IJB-C[[17](https://arxiv.org/html/2203.11593v2#bib.bib17)], MegaFace[[12](https://arxiv.org/html/2203.11593v2#bib.bib12)], LFW[[10](https://arxiv.org/html/2203.11593v2#bib.bib10)], CFPFP[[24](https://arxiv.org/html/2203.11593v2#bib.bib24)], AgeDB-30[[19](https://arxiv.org/html/2203.11593v2#bib.bib19)], CALFW[[41](https://arxiv.org/html/2203.11593v2#bib.bib41)], and K-FACE:Q1-Q4[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)]) were used to evaluate FR models. Table [1](https://arxiv.org/html/2203.11593v2#S4.T1 "Table 1 โฃ 4 Experiments โฃ Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition") summarizes the datasets used in our experiments.
Training. For preprocessing, face images were resized to $112\times 112$ and normalized using the means (0.485, 0.456, 0.406) and standard deviations (0.229, 0.224, 0.225). For data augmentation, a horizontal flip was applied with a 50% chance. All experiments were performed using two NVIDIA RTX A6000 GPUs with a mini-batch size of 512. ResNet-34 (R34) and ResNet-100 (R100) were used as backbone models. We re-implemented the state-of-the-art models: CosFace[[34](https://arxiv.org/html/2203.11593v2#bib.bib34)], ArcFace[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)], and MagFace[[18](https://arxiv.org/html/2203.11593v2#bib.bib18)].
Table 3: Identification results on MegaFace datasets with ResNet-100 backbone except for AdaCos. "*" indicates the results from the original paper.
Table 4: Verification accuracy on LFW, CFP-FP, AgeDB-30, and CALFW with ResNet-100 backbone.
| Method | LFW | CFP-FP | AgeDB | CALFW |
| --- | --- | --- | --- | --- |
| Circle-loss* | 99.73 | 96.02 | - | - |
| ArcFace* | 99.82 | - | - | 95.45 |
| MagFace* | 99.83 | 98.46 | 98.17 | 96.15 |
| CosFace | 99.83 | 97.72 | 98.11 | 96.11 |
| Cos+UNPG | 99.81 | 98.50 | 98.31 | 96.15 |
| ArcFace | 99.83 | 98.60 | 98.23 | 96.11 |
| Arc+UNPG | 99.83 | 98.60 | 98.25 | 96.13 |
| MagFace | 99.81 | 98.62 | 98.30 | 96.15 |
| Mag+UNPG | 99.81 | 98.52 | 98.38 | 96.21 |
Table 5: Verification results (TAR@FAR) on K-FACE with ResNet-34 backbone.
The hyper-parameters used in our experiments were as follows. In ArcFace and CosFace, the scale factor $\gamma=64$ and margin $m=0.5$ were set. In MagFace, $\gamma=64$, $l_a=10$, $u_a=110$, $l_m=0.4$, $u_m=0.8$, and $\lambda_g=35$ were used. For K-FACE, SN-pair[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)] and circle-loss[[27](https://arxiv.org/html/2203.11593v2#bib.bib27)] employed $\gamma=64$ and $\gamma=32, m=0.25$, respectively. In MixFace[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)], $\epsilon=10^{-22}$ and $m=0.5$ were set. In MS-loss[[35](https://arxiv.org/html/2203.11593v2#bib.bib35)], $\alpha=2$, $\gamma=0.5$, and $\beta=50$ were used. Triplet loss employed $m=0.5$. In contrastive loss, the positive and negative margins were set to 0 and 1, respectively. Finally, in UNPG, the whisker size $r=1.0$ was used with ResNet-34, whereas $r=1.5$ was used with ResNet-100.
The stochastic gradient descent (SGD) optimizer was used in conjunction with a cosine annealing scheduler[[16](https://arxiv.org/html/2203.11593v2#bib.bib16)] to control the learning rate, which started at 0.1. The momentum, weight decay, and number of warm-up epochs were set to 0.9, 0.0005, and 3, respectively. The maximum number of training epochs was 20 for all models, except 25 for MagFace for a fair comparison. The dimension of the deep features extracted from the backbone model was set to 512.
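The cosine annealing schedule above can be sketched in closed form; this is an illustrative sketch with the base learning rate of 0.1 from the text, assuming a minimum rate of 0 and per-epoch stepping (warm-up handled separately).

```python
import math

def cosine_annealing_lr(epoch, max_epochs=20, base_lr=0.1, eta_min=0.0):
    """Learning rate at a given epoch under cosine annealing."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1.0 + math.cos(math.pi * epoch / max_epochs))
```

The rate thus decays smoothly from 0.1 at epoch 0 to 0 at the final epoch, passing through 0.05 at the midpoint.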
Test. Cosine similarity was used as the similarity score. Different evaluation metrics were applied depending on the FR task. In the verification task (1:1), verification accuracy at the best threshold was used for datasets with a small number of test images and an equal ratio of positive and negative pairs, such as LFW, CFP-FP, AgeDB-30, CALFW, and CPLFW; otherwise, TAR@FAR was used on IJB-B, IJB-C, and K-FACE. In the identification task (1:N) on MegaFace, rank-1 accuracy was used.
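The two scoring pieces above can be sketched as follows; the function names are ours, and the TAR@FAR computation assumes the standard quantile-threshold definition rather than any particular benchmark devkit.

```python
import numpy as np

def cosine_scores(a, b):
    """Row-wise cosine similarity between feature matrices a and b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def tar_at_far(pos_scores, neg_scores, far=1e-4):
    """True accept rate at the threshold where the false accept rate is `far`."""
    thr = np.quantile(neg_scores, 1.0 - far)  # threshold passed by ~far of negatives
    return float(np.mean(pos_scores > thr))
```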

Figure 4: Comparison of overlapping similarities for positive and negative pairs with and without UNPG.

Figure 5: (a) Effect of noise-negative pair filtering in UNPG with ResNet-34. (b) Examples of three negative pair types in ascending order of similarity score.

Figure 6: Effects of backbone capacity and whisker size on IJB-B (verification), IJB-C (verification), and MegaFace (identification).
### 4.2 Evaluation Results
Results on IJB-B and IJB-C. IJB-B consists of 21.8K images of 1,845 subjects and 55K frames from 7,011 videos. IJB-C, an extended version of IJB-B, contains 31.3K images of 3,531 subjects and 117.5K frames from 11,799 videos. For 1:1 verification, 10K/8M (IJB-B) and 19K/15M (IJB-C) positive/negative pairs were used. Owing to the severe imbalance between positive and negative pairs, performance was measured by TAR@FAR at FAR values of [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]. As shown in Table [2](https://arxiv.org/html/2203.11593v2#S4.T2 "Table 2"), all FR models with UNPG improved at every FAR compared to those without UNPG. In particular, TAR@(FAR=1e-4), the operating point most widely used in FR, improved consistently. For example, Mag+UNPG obtained gains of 1.22% and 0.84% on IJB-B and IJB-C, respectively, compared to MagFace, and gains of 0.7% and 0.41% compared to MagFace*.
Results on MegaFace. MegaFace consists of a gallery set of 1M images of 690K classes and a probe set of 100K images of 530 classes. We followed the test protocol of ArcFace[[4](https://arxiv.org/html/2203.11593v2#bib.bib4)]: we removed noisy images and measured rank-1 accuracy against the 1M distractors following the identification scenario, using the devkit provided by MegaFace. Table [3](https://arxiv.org/html/2203.11593v2#S4.T3 "Table 3") presents the results. FR models with UNPG performed better than those without it; ArcFace and CosFace with UNPG obtained gains of 0.26% and 0.19%, respectively, over their counterparts.
Results on LFW, CFP-FP, AgeDB-30, and CALFW. FR on LFW, CFP-FP, AgeDB-30, and CALFW is relatively easy, so performance on these datasets is saturated. Each dataset contains 6,000 test pairs with a 1:1 ratio between positive and negative pairs. Verification accuracy was measured at the best threshold separating the positive and negative pairs. As shown in Table [4](https://arxiv.org/html/2203.11593v2#S4.T4 "Table 4"), the FR models with UNPG obtained competitive performance on all four datasets.
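The best-threshold accuracy used on these balanced datasets can be sketched as an exhaustive search over candidate thresholds; the function name is ours, and in practice benchmarks typically select the threshold by cross-validation over folds rather than on the test pairs directly.

```python
import numpy as np

def best_threshold_accuracy(pos_scores, neg_scores):
    """Verification accuracy at the single threshold that best separates pairs."""
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones(len(pos_scores), bool),
                             np.zeros(len(neg_scores), bool)])
    best = 0.0
    for thr in np.unique(scores):          # every observed score is a candidate
        acc = float(np.mean((scores >= thr) == labels))
        best = max(best, acc)
    return best
```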
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3M images of 400 persons, covering 6 accessories, 30 illumination settings, 3 expressions, and 20 poses. We adopted the same training and test splits as MixFace[[11](https://arxiv.org/html/2203.11593v2#bib.bib11)]. The training split comprises 3.8M images of 370 persons. The test split, covering the remaining 30 persons, is partitioned into Q1, Q2, Q3, and Q4, where the number after Q indicates the variety of conditions included; Q4 is the most challenging task, whereas Q1 is the easiest of the four. Table [5](https://arxiv.org/html/2203.11593v2#S4.T5 "Table 5") presents the results. Surprisingly, ArcFace with UNPG outperformed the other FR models on Q1, Q2, Q3, and Q4; specifically, it obtained gains of 25.38%, 19.34%, and 15.33% in TAR@(FAR=1e-4) compared to circle loss.
### 4.3 Analysis
Does it sufficiently satisfy WDFS? We conclude that UNPG helps FR models form WDFS by reducing the gap between $\hat{\mathcal{S}}^p$ and $\hat{\mathcal{S}}^n$. As shown in Fig. [4](https://arxiv.org/html/2203.11593v2#S4.F4 "Figure 4"), we measured the number of overlapping similarity scores between $\hat{\mathcal{S}}^p$ and $\hat{\mathcal{S}}^n$ using ArcFace, CosFace, and MagFace with and without UNPG after training with R100. We randomly sampled 256 positive pairs and the 256 hardest negative pairs at each iteration from MS1M-V2. After 1,000 iterations, we generated $\hat{\mathcal{S}}^p$ and $\hat{\mathcal{S}}^n$, each containing 257,992 scores in total, and calculated the overlap between them using a histogram. Applying UNPG consistently reduced the overlap by 260 (ArcFace), 321 (CosFace), and 440 (MagFace). This confirms the effect of UNPG, as expected.
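One way to count this histogram overlap is to bin both score sets over a shared range and sum the per-bin minima; this is a sketch of that idea, with the bin count as our assumption rather than the paper's setting.

```python
import numpy as np

def histogram_overlap(pos_scores, neg_scores, bins=100):
    """Number of scores falling in bins occupied by both distributions."""
    lo = min(pos_scores.min(), neg_scores.min())
    hi = max(pos_scores.max(), neg_scores.max())
    hp, _ = np.histogram(pos_scores, bins=bins, range=(lo, hi))
    hn, _ = np.histogram(neg_scores, bins=bins, range=(lo, hi))
    return int(np.minimum(hp, hn).sum())   # shared mass across all bins
```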
Effect of Noise-negative Pair Filtering. To approximate WDFS, $\mathcal{N}^{ml}$ was assumed to include extremely hard negative pairs because it can produce similarity scores close to $\sup \mathcal{S}^n$. In Fig. [5](https://arxiv.org/html/2203.11593v2#S4.F5 "Figure 5") (a), we observe that an FR model using $\mathcal{N}^{ml}$ without filtering (+UNPG w/o filtering) at each iteration suffers performance degradation on LFW and loss divergence, whereas with filtering (+UNPG) it achieves better performance and loss convergence. Although FR models should ultimately distinguish even too-hard negative pairs, we argue that such pairs have adverse effects on a model whose representation power is insufficient to cover them. Fig. [5](https://arxiv.org/html/2203.11593v2#S4.F5 "Figure 5") (b) shows negative pairs sampled from MS1M-V2, categorized as too-easy, informative, and too-hard, with similarity scores computed using ArcFace with UNPG. The too-hard pairs are difficult to distinguish even for humans because the faces are very similar, whereas the too-easy pairs are useless because they are already well separated. Consequently, it is reasonable to focus on the informative pairs in $\mathcal{N}^{ml}$.
Too-hard pairs can be handled by enlarging the model capacity, as depicted in Fig. [6](https://arxiv.org/html/2203.11593v2#S4.F6 "Figure 6"). We conducted experiments using ArcFace with different backbones, R34 and R100, on IJB-B, IJB-C, and MegaFace for verification and identification. With R34, the highest performance was obtained at whisker size $r=1.0$ on all datasets, whereas with R100 it was obtained at $r=1.5$, with a gain of approximately 0.2%. This reveals that the informative range determined by the whisker size $r$ widens as the model's representation power increases.
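A whisker-based filter of this kind can be sketched with the Tukey box-plot rule[31]: drop negative similarity scores above the upper whisker $Q_3 + r \cdot \mathrm{IQR}$. The exact rule and the function name here are our assumptions for illustration.

```python
import numpy as np

def filter_noise_negatives(neg_scores, r=1.0):
    """Keep negative-pair scores at or below the upper whisker Q3 + r * IQR."""
    q1, q3 = np.percentile(neg_scores, [25, 75])
    upper = q3 + r * (q3 - q1)      # larger r widens the informative range
    return neg_scores[neg_scores <= upper]
```

Under this reading, increasing $r$ (as done for R100) retains harder negative pairs, which matches the observation that a higher-capacity backbone benefits from a wider informative range.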
5 Conclusion
------------
This paper is based on two insights. First, from a unified perspective, CL and ML share the same goal of approaching WDFS and differ only in PG. Second, both CL and ML exhibit a mismatch between the similarity distributions of the sampled pairs and of all negative pairs. Based on these insights, we developed UNPG, which combines the two PG strategies (MLPG and CLPG) to alleviate this mismatch. Filtering was also applied to remove both too-easy and too-hard negative pairs. We observed that UNPG improves the learning of existing FR models compared to MLPG or CLPG alone by providing more informative pairs. Finally, we suggest two research directions for FR: 1) pair generation strategies in the qualitative aspect and 2) loss functions that account for the model's representation power.
6 Acknowledgement
-----------------
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1I1A3052815) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00921).
References
----------
* [1] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 67–74. IEEE, 2018.
* [2] Y. Choi, H. Park, G. P. Nam, H. Kim, H. Choi, J. Cho, and I.-J. Kim. K-face: A large-scale kist face database in consideration with unconstrained environments. arXiv preprint arXiv:2103.02211, 2021.
* [3] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, volume 1, pages 539–546. IEEE, 2005.
* [4] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, pages 4690–4699, 2019.
* [5] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In ECCV, pages 87–102. Springer, 2016.
* [6] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, volume 2, pages 1735–1742. IEEE, 2006.
* [7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
* [8] E. Hoffer and N. Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84–92. Springer, 2015.
* [9] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In CVPR, pages 7132–7141, 2018.
* [10] G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in "Real-Life" Images: Detection, Alignment, and Recognition, 2008.
* [11] J. Jung, S. Son, J. Park, Y. Park, S. Lee, and H.-S. Oh. Mixface: Improving face verification focusing on fine-grained conditions. arXiv preprint arXiv:2111.01717, 2021.
* [12] I. Kemelmacher-Shlizerman, S. M. Seitz, D. Miller, and E. Brossard. The megaface benchmark: 1 million faces for recognition at scale. In CVPR, pages 4873–4882, 2016.
* [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
* [14] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, pages 212–220, 2017.
* [15] W. Liu, Y. Wen, Z. Yu, and M. Yang. Large-margin softmax loss for convolutional neural networks. In ICML, volume 2, page 7, 2016.
* [16] I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
* [17] B. Maze, J. Adams, J. A. Duncan, N. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, et al. Iarpa janus benchmark-c: Face dataset and protocol. In 2018 International Conference on Biometrics, pages 158–165. IEEE, 2018.
* [18] Q. Meng, S. Zhao, Z. Huang, and F. Zhou. Magface: A universal representation for face recognition and quality assessment. In CVPR, pages 14225–14234, 2021.
* [19] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou. Agedb: The first manually collected, in-the-wild age database. In CVPRW, pages 51–59, 2017.
* [20] H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, pages 4004–4012, 2016.
* [21] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In BMVC, pages 41.1–41.12, 2015.
* [22] R. Ranjan, C. D. Castillo, and R. Chellappa. L2-constrained softmax loss for discriminative face verification. arXiv preprint arXiv:1703.09507, 2017.
* [23] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, pages 815–823, 2015.
* [24] S. Sengupta, J.-C. Chen, C. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs. Frontal to profile face verification in the wild. In 2016 IEEE Winter Conference on Applications of Computer Vision, pages 1–9. IEEE, 2016.
* [25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [26] K. Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NeurIPS, pages 1857–1865, 2016.
* [27] Y. Sun, C. Cheng, Y. Zhang, C. Zhang, L. Zheng, Z. Wang, and Y. Wei. Circle loss: A unified perspective of pair similarity optimization. In CVPR, pages 6398–6407, 2020.
* [28] Y. Sun, X. Wang, and X. Tang. Deeply learned face representations are sparse, selective, and robust. In CVPR, pages 2892–2900, 2015.
* [29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1–9, 2015.
* [30] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708, 2014.
* [31] J. W. Tukey et al. Exploratory Data Analysis, volume 2. Reading, Mass., 1977.
* [32] F. Wang, J. Cheng, W. Liu, and H. Liu. Additive margin softmax for face verification. IEEE Signal Processing Letters, 25(7):926–930, 2018.
* [33] F. Wang, X. Xiang, J. Cheng, and A. L. Yuille. Normface: L2 hypersphere embedding for face verification. In ACM MM, pages 1041–1049, 2017.
* [34] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, pages 5265–5274, 2018.
* [35] X. Wang, X. Han, W. Huang, D. Dong, and M. R. Scott. Multi-similarity loss with general pair weighting for deep metric learning. In CVPR, pages 5022–5030, 2019.
* [36] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, pages 499–515. Springer, 2016.
* [37] C. Whitelam, E. Taborsky, A. Blanton, B. Maze, J. Adams, T. Miller, N. Kalka, A. K. Jain, J. A. Duncan, K. Allen, et al. Iarpa janus benchmark-b face dataset. In CVPRW, pages 90–98, 2017.
* [38] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
* [39] B. Yu and D. Tao. Deep metric learning with tuplet margin loss. In CVPR, pages 6490–6499, 2019.
* [40] X. Zhang, R. Zhao, Y. Qiao, X. Wang, and H. Li. Adacos: Adaptively scaling cosine logits for effectively learning deep face representations. In CVPR, pages 10823–10832, 2019.
* [41] T. Zheng, W. Deng, and J. Hu. Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments. arXiv preprint arXiv:1708.08197, 2017.