Columns:
- prompts: string, length 81 to 413
- metrics_response: string, length 0 to 371
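Assuming the rows below alternate between a prompt line and its response line, a minimal sketch for rebuilding the two-column (prompts, metrics_response) records; the inline sample is copied from this dump, and the odd/even pairing rule is an assumption:

```python
# Rebuild (prompts, metrics_response) records from alternating lines.
# Sample rows are taken verbatim from this dump; the assumption is that
# odd lines are prompts and even lines are responses.
raw = """What metrics were used to measure the CenterFace model in the CenterFace: Joint Face Detection and Alignment Using Face as Point paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the QMagFace model in the QMagFace: Simple and Accurate Quality-Aware Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5"""

lines = [ln for ln in raw.splitlines() if ln.strip()]
records = [
    {"prompts": q, "metrics_response": a}
    for q, a in zip(lines[0::2], lines[1::2])
]

for rec in records:
    print(rec["metrics_response"])
```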
What metrics were used to measure the CenterFace model in the CenterFace: Joint Face Detection and Alignment Using Face as Point paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the Massively-large receptive fields model in the Finding Tiny Faces paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the EXTD model in the EXTD: Extremely Tiny Face Detector via Iterative Filter Reuse paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the RNNPool-Face-C model in the RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the img2pose model in the img2pose: Face Alignment and Detection via 6DoF, Face Pose Estimation paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the SCRFD-0.5GF model in the Sample and Computation Redistribution for Efficient Face Detection paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the CMS-RCNN model in the CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the LFFD model in the LFFD: A Light and Fast Face Detector for Edge Devices paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the Multitask Cascade CNN model in the Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the LDCF+ model in the To Boost or Not to Boost? On the Limits of Boosted Trees for Object Detection paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the Multiscale Cascade CNN model in the WIDER FACE: A Face Detection Benchmark paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the Faceness-WIDER model in the WIDER FACE: A Face Detection Benchmark paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the Two-stage CNN model in the WIDER FACE: A Face Detection Benchmark paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the ACF-WIDER model in the Aggregate channel features for multi-view face detection paper on the WIDER Face (Medium) dataset?
AP
What metrics were used to measure the QMagFace model in the QMagFace: Simple and Accurate Quality-Aware Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the Arc+UNPG model in the Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the Mag+UNPG model in the Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the Cos+UNPG model in the Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the FPN model in the FacePoseNet: Making a Case for Landmark-Free Face Alignment paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the SE-GV-3-g2 model in the GhostVLAD for set-based face recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the VGGFace2_ft model in the VGGFace2: A dataset for recognising faces across pose and age paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the CAFace+AdaFace (WebFace4M) model in the Cluster and Aggregate: Face Recognition with Large Probe Set paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the PartialFC (WebFace42M) model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the AdaFace (WebFace4M) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the AdaFace (MS1MV3) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the AdaFace (MS1MV2) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-B dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.0001, TAR @ FAR=1e-5
What metrics were used to measure the LightCNN-29 + DVG model in the Dual Variational Generation for Low-Shot Heterogeneous Face Recognition paper on the CASIA NIR-VIS 2.0 dataset?
TAR @ FAR=0.001
What metrics were used to measure the DVR Wu et al. (2019) model in the Disentangled Variational Representation for Heterogeneous Face Recognition paper on the CASIA NIR-VIS 2.0 dataset?
TAR @ FAR=0.001
What metrics were used to measure the W-CNN He et al. (2018) model in the Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition paper on the CASIA NIR-VIS 2.0 dataset?
TAR @ FAR=0.001
What metrics were used to measure the DiscFace model in the DiscFace: Minimum Discrepancy Learning for Deep Face Recognition paper on the CALFW dataset?
Accuracy
What metrics were used to measure the SFace model in the SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition paper on the CALFW dataset?
Accuracy
What metrics were used to measure the Dual-Agent GANs model in the Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the PFEfuse + match model in the Probabilistic Face Embeddings paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the SE-GV-4-g1 model in the GhostVLAD for set-based face recognition paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the L2-constrained softmax loss model in the L2-constrained Softmax Loss for Discriminative Face Verification paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the VGGFace2_ft model in the VGGFace2: A dataset for recognising faces across pose and age paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Deep Residual Equivariant Mapping model in the Pose-Robust Face Recognition via Deep Residual Equivariant Mapping paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the NAN model in the Neural Aggregation Network for Video Face Recognition paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Template adaptation model in the Template Adaptation for Face Verification and Identification paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the All-in-one CNN model in the An All-In-One Convolutional Neural Network for Face Analysis paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the FPN model in the FacePoseNet: Making a Case for Landmark-Free Face Alignment paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Triplet probabilistic embedding model in the Triplet Probabilistic Embedding for Face Verification and Clustering paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Synthesis as data augmentation model in the Do We Really Need to Collect Millions of Faces for Effective Face Recognition? paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the DCNN model in the Unconstrained Face Verification using Deep CNN Features paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Deep multi-pose representations model in the Face Recognition Using Deep Multi-Pose Representations paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the Deep CNN + COTS matcher model in the Face Search at Scale: 80 Million Gallery paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the VGG + GANFaces model in the Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model paper on the IJB-A dataset?
TAR @ FAR=0.01, TAR @ FAR=0.001, TAR @ FAR=0.1
What metrics were used to measure the LightCNN-29 + DVG model in the Dual Variational Generation for Low-Shot Heterogeneous Face Recognition paper on the BUAA-VisNir dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the DVR Wu et al. (2019) model in the Disentangled Variational Representation for Heterogeneous Face Recognition paper on the BUAA-VisNir dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the W-CNN He et al. (2018) model in the Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition paper on the BUAA-VisNir dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the SphereFace model in the SphereFace: Deep Hypersphere Embedding for Face Recognition paper on the CK+ dataset?
Accuracy
What metrics were used to measure the LightCNN-29 + DVG model in the Dual Variational Generation for Low-Shot Heterogeneous Face Recognition paper on the Oulu-CASIA NIR-VIS dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the DVR Wu et al. (2019) model in the Disentangled Variational Representation for Heterogeneous Face Recognition paper on the Oulu-CASIA NIR-VIS dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the W-CNN He et al. (2018) model in the Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition paper on the Oulu-CASIA NIR-VIS dataset?
TAR @ FAR=0.001, TAR @ FAR=0.01
What metrics were used to measure the PartialFC (R200) model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the CFP-FP dataset?
Accuracy
What metrics were used to measure the QMagFace model in the QMagFace: Simple and Accurate Quality-Aware Face Recognition paper on the CFP-FP dataset?
Accuracy
What metrics were used to measure the VarGFaceNet model in the VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition paper on the CFP-FP dataset?
Accuracy
What metrics were used to measure the Seesaw-shuffleFaceNet (mobi) model in the SeesawFaceNets: sparse and robust face verification model for mobile platform paper on the CFP-FP dataset?
Accuracy
What metrics were used to measure the VarGNet model in the VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing paper on the CFP-FP dataset?
Accuracy
What metrics were used to measure the DiscFace model in the DiscFace: Minimum Discrepancy Learning for Deep Face Recognition paper on the CPLFW dataset?
Accuracy
What metrics were used to measure the SFace model in the SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition paper on the CPLFW dataset?
Accuracy
What metrics were used to measure the ArcFaceR50 + EM-FRR model in the Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model paper on the LFW dataset?
BFAR, BFRR, FRR@FAR(%)
What metrics were used to measure the ArcFaceR50 + EM-C model in the Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model paper on the LFW dataset?
BFAR, BFRR, FRR@FAR(%)
What metrics were used to measure the ArcFaceR50 + EM-FAR model in the Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model paper on the LFW dataset?
BFAR, BFRR, FRR@FAR(%)
What metrics were used to measure the Prodpoly model in the Deep Polynomial Neural Networks paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the ElasticFace-Arc model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the ArcFace + MS1MV2 + R100 + R model in the ArcFace: Additive Angular Margin Loss for Deep Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the DiscFace model in the DiscFace: Minimum Discrepancy Learning for Deep Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the Dynamic AdaCos model in the AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the SV-AM-Softmax model in the Support Vector Guided Softmax Loss for Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the CosFace model in the CosFace: Large Margin Cosine Loss for Deep Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the PFEfuse + match model in the Probabilistic Face Embeddings paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the SphereFace (3-patch ensemble) model in the SphereFace: Deep Hypersphere Embedding for Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the SphereFace (single model) model in the SphereFace: Deep Hypersphere Embedding for Face Recognition paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the Light CNN-29 model in the A Light CNN for Deep Face Representation with Noisy Labels paper on the MegaFace dataset?
Accuracy
What metrics were used to measure the PartialFC (R200) model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the DiscFace model in the DiscFace: Minimum Discrepancy Learning for Deep Face Recognition paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the VarGFaceNet model in the VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the VarGNet model in the VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the Seesaw-shuffleFaceNet model in the SeesawFaceNets: sparse and robust face verification model for mobile platform paper on the AgeDB-30 dataset?
Accuracy
What metrics were used to measure the HeadSharing: SH-KD model in the It's All in the Head: Representation Knowledge Distillation through Classifier Sharing paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the HeadSharing: TH-KD model in the It's All in the Head: Representation Knowledge Distillation through Classifier Sharing paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the ArcFace+CSFM model in the Controllable and Guided Face Synthesis for Unconstrained Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the Partial FC model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the PartialFC model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the ArcFace model in the ArcFace: Additive Angular Margin Loss for Deep Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the Mag+UNPG model in the Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the Cos+UNPG model in the Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the L2E+IS-sampling model in the Rectifying the Data Bias in Knowledge Distillation paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the MagFace++ model in the MagFace: A Universal Representation for Face Recognition and Quality Assessment paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the circle loss model in the Circle Loss: A Unified Perspective of Pair Similarity Optimization paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the WebFace42M baseline model in the WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the AdaFace (WebFace4M) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the FFC model in the An Efficient Training Approach for Very Large Scale Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the CAFace+AdaFace (WebFace4M) model in the Cluster and Aggregate: Face Recognition with Large Probe Set paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the AdaFace (MS1MV3) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the AdaFace (MS1MV2) model in the AdaFace: Quality Adaptive Margin for Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5
What metrics were used to measure the ElasticFace-Cos model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the IJB-C dataset?
TAR @ FAR=1e-6, TAR @ FAR=1e-5, TAR @ FAR=1e-4, TAR @ FAR=1e-3, TAR @ FAR=1e-2, Rank-1, Rank-5