| task_path | dataset | model_name | paper_url | metric_name | metric_value |
|---|---|---|---|---|---|
| Facial Recognition and Modelling > Face Detection | WIDER Face (Hard) | ACF-WIDER | https://arxiv.org/abs/1407.4023v2 | AP | 0.290 |
| Facial Recognition and Modelling > Face Detection > Occluded Face Detection | WIDER Face (Easy) | TinaFace(ResNet-50) | https://arxiv.org/abs/2011.13183v3 | AP | 0.97 |
| Facial Recognition and Modelling > Face Detection > Occluded Face Detection | WIDER Face (Medium) | TinaFace(ResNet-50) | https://arxiv.org/abs/2011.13183v3 | AP | 0.963 |
| Facial Recognition and Modelling > Face Detection > Occluded Face Detection | MAFA | FAN | http://arxiv.org/abs/1711.07246v2 | MAP | 88.3% |
| Facial Recognition and Modelling > Face Detection > Occluded Face Detection | MAFA | AOFD | http://arxiv.org/abs/1709.05188v6 | MAP | 77.3% |
| Facial Recognition and Modelling > Face Detection > Occluded Face Detection | WIDER Face (Hard) | TinaFace(ResNet-50) | https://arxiv.org/abs/2011.13183v3 | AP | 0.934 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 8-shot learning | CainGAN | https://arxiv.org/abs/2004.09169v1 | FID | 24.9 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 8-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 42.2 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Fast Bi-layer Avatars (medium size) | https://arxiv.org/abs/2008.10174v1 | LPIPS | 0.358 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Fast Bi-layer Avatars (medium size) | https://arxiv.org/abs/2008.10174v1 | SSIM | 0.508 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Fast Bi-layer Avatars (medium size) | https://arxiv.org/abs/2008.10174v1 | CSIM | 0.653 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Fast Bi-layer Avatars (medium size) | https://arxiv.org/abs/2008.10174v1 | Normalized Pose Error | 43.3 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Fast Bi-layer Avatars (medium size) | https://arxiv.org/abs/2008.10174v1 | inference time (ms) | 4 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | First Order Motion Model (medium size) | https://arxiv.org/abs/2008.10174v1 | LPIPS | 0.311 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | First Order Motion Model (medium size) | https://arxiv.org/abs/2008.10174v1 | SSIM | 0.553 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | First Order Motion Model (medium size) | https://arxiv.org/abs/2008.10174v1 | CSIM | 0.638 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | First Order Motion Model (medium size) | https://arxiv.org/abs/2008.10174v1 | Normalized Pose Error | 47.8 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | First Order Motion Model (medium size) | https://arxiv.org/abs/2008.10174v1 | inference time (ms) | 13 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Vid-to-vid (medium size) | https://arxiv.org/abs/2008.10174v1 | LPIPS | 0.368 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Vid-to-vid (medium size) | https://arxiv.org/abs/2008.10174v1 | SSIM | 0.419 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Vid-to-vid (medium size) | https://arxiv.org/abs/2008.10174v1 | CSIM | 0.604 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Vid-to-vid (medium size) | https://arxiv.org/abs/2008.10174v1 | Normalized Pose Error | 46.1 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Vid-to-vid (medium size) | https://arxiv.org/abs/2008.10174v1 | inference time (ms) | 22 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | CainGAN | https://arxiv.org/abs/2004.09169v1 | FID | 35.0 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 48.5 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | 100 sleep nights of 8 caregivers | Ashok | https://arxiv.org/abs/1912.06078v1 | 10% | 12 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 8-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 38.0 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 8-shot learning | X2Face | http://openaccess.thecvf.com/content_ECCV_2018/html/Olivia_Wiles_X2Face_A_network_ECCV_2018_paper.html | FID | 51.5 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 32-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 29.5 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 32-shot learning | X2Face | http://openaccess.thecvf.com/content_ECCV_2018/html/Olivia_Wiles_X2Face_A_network_ECCV_2018_paper.html | FID | 56.5 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 1-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 43.0 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb1 - 1-shot learning | X2Face | http://openaccess.thecvf.com/content_ECCV_2018/html/Olivia_Wiles_X2Face_A_network_ECCV_2018_paper.html | FID | 45.8 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation | VoxCeleb2 - 32-shot learning | Few-shot Adversarial Model | https://arxiv.org/abs/1905.08233v2 | FID | 30.6 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip + ViT + MARLIN | https://arxiv.org/abs/2211.06627v3 | LSE-D | 7.127 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip + ViT + MARLIN | https://arxiv.org/abs/2211.06627v3 | LSE-C | 5.528 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip + ViT + MARLIN | https://arxiv.org/abs/2211.06627v3 | FID | 3.452 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.469 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | FID | 4.446 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.386 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-C | 7.781 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS2 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | FID | 4.887 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.986 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | LSE-C | 7.574 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | FID | 4.35 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.652 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-C | 7.887 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRS3 | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | FID | 4.844 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.774 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | LSE-C | 7.263 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip + GAN | https://arxiv.org/abs/2008.10010v1 | FID | 2.475 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-D | 6.512 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | LSE-C | 7.49 |
| Facial Recognition and Modelling > Face Generation > Talking Head Generation > Unconstrained Lip-synchronization | LRW | Wav2Lip | https://arxiv.org/abs/2008.10010v1 | FID | 3.189 |
| Facial Recognition and Modelling > Face Generation > Talking Face Generation | CREMA-D | EmoGen | https://arxiv.org/abs/2303.11548v2 | EmoAcc | 83.2 |
| Facial Recognition and Modelling > Face Generation > Talking Face Generation | CREMA-D | EmoGen | https://arxiv.org/abs/2303.11548v2 | FID | 5.29 |
| Facial Recognition and Modelling > Face Generation > Talking Face Generation | CREMA-D | EmoGen | https://arxiv.org/abs/2303.11548v2 | LSE-C | 6.663 |
| Facial Recognition and Modelling > Face Generation > Talking Face Generation | LRW | LipGAN | https://arxiv.org/abs/2003.00418v1 | LMD | 0.60 |
| Facial Recognition and Modelling > Face Generation > Talking Face Generation | LRW | LipGAN | https://arxiv.org/abs/2003.00418v1 | SSIM | 0.96 |
| Facial Recognition and Modelling > Face Verification | AgeDB-30 | PartialFC(R200) | https://arxiv.org/abs/2203.15565v1 | Accuracy | 0.9870 |
| Facial Recognition and Modelling > Face Verification | AgeDB-30 | GhostFaceNetV2-1 | https://ieeexplore.ieee.org/document/10098610 | Accuracy | 0.9862 |
| Facial Recognition and Modelling > Face Verification | AgeDB-30 | DiscFace | https://openaccess.thecvf.com/content/ACCV2020/html/Kim_DiscFace_Minimum_Discrepancy_Learning_for_Deep_Face_Recognition_ACCV_2020_paper.html | Accuracy | 0.9835 |
| Facial Recognition and Modelling > Face Verification | AgeDB-30 | VarGFaceNet | https://arxiv.org/abs/1910.04985v4 | Accuracy | 0.9815 |
| Facial Recognition and Modelling > Face Verification | AgeDB-30 | VarGNet | https://arxiv.org/abs/1907.05653v2 | Accuracy | 0.97333 |
| Facial Recognition and Modelling > Face Verification | IJB-A | Dual-Agent GANs | http://papers.nips.cc/paper/6612-dual-agent-gans-for-photorealistic-and-identity-preserving-profile-face-synthesis | TAR @ FAR=0.01 | 97.60% |
| Facial Recognition and Modelling > Face Verification | IJB-A | PFEfuse + match | https://arxiv.org/abs/1904.09658v4 | TAR @ FAR=0.01 | 97.5% |
| Facial Recognition and Modelling > Face Verification | IJB-A | PFEfuse + match | https://arxiv.org/abs/1904.09658v4 | TAR @ FAR=0.001 | 95.25 |
| Facial Recognition and Modelling > Face Verification | IJB-A | SE-GV-4-g1 | http://arxiv.org/abs/1810.09951v1 | TAR @ FAR=0.01 | 97.2% |
| Facial Recognition and Modelling > Face Verification | IJB-A | L2-constrained softmax loss | http://arxiv.org/abs/1703.09507v3 | TAR @ FAR=0.01 | 97% |
| Facial Recognition and Modelling > Face Verification | IJB-A | VGGFace2_ft | http://arxiv.org/abs/1710.08092v2 | TAR @ FAR=0.01 | 96.8% |
| Facial Recognition and Modelling > Face Verification | IJB-A | VGGFace2_ft | http://arxiv.org/abs/1710.08092v2 | TAR @ FAR=0.001 | 92.1 |
| Facial Recognition and Modelling > Face Verification | IJB-A | VGGFace2_ft | http://arxiv.org/abs/1710.08092v2 | TAR @ FAR=0.1 | 0.99 |
| Facial Recognition and Modelling > Face Verification | IJB-A | StyleFNM | https://arxiv.org/abs/2312.14544v1 | TAR @ FAR=0.01 | 94.60% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Deep Residual Equivariant Mapping | http://arxiv.org/abs/1803.00839v1 | TAR @ FAR=0.01 | 94.40% |
| Facial Recognition and Modelling > Face Verification | IJB-A | NAN | http://arxiv.org/abs/1603.05474v4 | TAR @ FAR=0.01 | 94.10% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Template adaptation | http://arxiv.org/abs/1603.03958v3 | TAR @ FAR=0.01 | 93.90% |
| Facial Recognition and Modelling > Face Verification | IJB-A | All-in-one CNN | http://arxiv.org/abs/1611.00851v1 | TAR @ FAR=0.01 | 92.20% |
| Facial Recognition and Modelling > Face Verification | IJB-A | FPN | http://arxiv.org/abs/1708.07517v2 | TAR @ FAR=0.01 | 90.1% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Triplet probabilistic embedding | http://arxiv.org/abs/1604.05417v3 | TAR @ FAR=0.01 | 90% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Synthesis as data augmentation | http://arxiv.org/abs/1603.07057v2 | TAR @ FAR=0.01 | 88.60% |
| Facial Recognition and Modelling > Face Verification | IJB-A | DCNN | http://arxiv.org/abs/1508.01722v2 | TAR @ FAR=0.01 | 83.80% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Deep multi-pose representations | http://arxiv.org/abs/1603.07388v1 | TAR @ FAR=0.01 | 78.70% |
| Facial Recognition and Modelling > Face Verification | IJB-A | Deep CNN + COTS matcher | http://arxiv.org/abs/1507.07242v2 | TAR @ FAR=0.01 | 73.30% |
| Facial Recognition and Modelling > Face Verification | IJB-A | VGG + GANFaces | http://arxiv.org/abs/1804.03675v1 | TAR @ FAR=0.01 | 53.507% |
| Facial Recognition and Modelling > Face Verification | IJB-A | VGG + GANFaces | http://arxiv.org/abs/1804.03675v1 | TAR @ FAR=0.001 | 18.768 |
| Facial Recognition and Modelling > Face Verification | CFP-FP | PartialFC (R200) | https://arxiv.org/abs/2203.15565v1 | Accuracy | 0.9951 |
| Facial Recognition and Modelling > Face Verification | CFP-FP | QMagFace | https://arxiv.org/abs/2111.13475v3 | Accuracy | 0.9874 |
| Facial Recognition and Modelling > Face Verification | CFP-FP | VarGFaceNet | https://arxiv.org/abs/1910.04985v4 | Accuracy | 0.985 |
| Facial Recognition and Modelling > Face Verification | CFP-FP | VarGNet | https://arxiv.org/abs/1907.05653v2 | Accuracy | 0.89829 |
| Facial Recognition and Modelling > Face Verification | CALFW | DiscFace | https://openaccess.thecvf.com/content/ACCV2020/html/Kim_DiscFace_Minimum_Discrepancy_Learning_for_Deep_Face_Recognition_ACCV_2020_paper.html | Accuracy | 96.15 |
| Facial Recognition and Modelling > Face Verification | CALFW | SFace | https://arxiv.org/abs/2205.12010v1 | Accuracy | 93.95% |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FRR | https://arxiv.org/abs/2210.13664v3 | FRR@FAR(%) | 0.100 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FRR | https://arxiv.org/abs/2210.13664v3 | BFRR | 5.89 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FRR | https://arxiv.org/abs/2210.13664v3 | BFAR | 33.65 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-C | https://arxiv.org/abs/2210.13664v3 | FRR@FAR(%) | 0.164 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-C | https://arxiv.org/abs/2210.13664v3 | BFRR | 9.18 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-C | https://arxiv.org/abs/2210.13664v3 | BFAR | 2.44 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FAR | https://arxiv.org/abs/2210.13664v3 | FRR@FAR(%) | 0.151 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FAR | https://arxiv.org/abs/2210.13664v3 | BFRR | 11.22 |
| Facial Recognition and Modelling > Face Verification | LFW | ArcFaceR50 + EM-FAR | https://arxiv.org/abs/2210.13664v3 | BFAR | 2.11 |
| Facial Recognition and Modelling > Face Verification | IJB-C | HeadSharing: SH-KD | https://arxiv.org/abs/2201.06945v2 | TAR @ FAR=1e-4 | 95.64% |
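Rows like those above can be queried programmatically once loaded into records. A minimal pure-Python sketch follows; the `best` helper and the handful of hard-coded sample rows (values copied from the table) are illustrative only, and note that some metrics here are higher-is-better (AP, SSIM, Accuracy) while others are lower-is-better (FID, LPIPS, LSE-D), which the helper must be told explicitly.

```python
# Sample records copied from the table above (dataset, model, metric, value).
rows = [
    {"dataset": "WIDER Face (Hard)", "model": "ACF-WIDER", "metric": "AP", "value": 0.290},
    {"dataset": "WIDER Face (Hard)", "model": "TinaFace(ResNet-50)", "metric": "AP", "value": 0.934},
    {"dataset": "LRS2", "model": "Wav2Lip", "metric": "FID", "value": 4.887},
    {"dataset": "LRS2", "model": "Wav2Lip + GAN", "metric": "FID", "value": 4.446},
    {"dataset": "LRS2", "model": "Wav2Lip + ViT + MARLIN", "metric": "FID", "value": 3.452},
]

def best(rows, dataset, metric, higher_is_better=True):
    """Return the top-scoring record for a (dataset, metric) pair."""
    candidates = [r for r in rows if r["dataset"] == dataset and r["metric"] == metric]
    pick = max if higher_is_better else min
    return pick(candidates, key=lambda r: r["value"])

# AP is higher-is-better; FID is lower-is-better.
print(best(rows, "WIDER Face (Hard)", "AP")["model"])               # TinaFace(ResNet-50)
print(best(rows, "LRS2", "FID", higher_is_better=False)["model"])   # Wav2Lip + ViT + MARLIN
```

The same pattern extends to the full table if it is exported as CSV and read with `csv.DictReader`.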