| prompts | metrics_response |
|---|---|
What metrics were used to measure the PIC - MagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the MORPH dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the PIC - QMagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the MORPH dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the EdgeFace - S (g=0.5) (ours) model in the EdgeFace: Efficient Face Recognition Model for Edge Devices paper on the IJB-C dataset? | TAR @ FAR=0.01 |
What metrics were used to measure the EdgeFace - XS (g=0.6) (ours) model in the EdgeFace: Efficient Face Recognition Model for Edge Devices paper on the IJB-C dataset? | TAR @ FAR=0.01 |
What metrics were used to measure the PIC - MagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Adience dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the PIC - QMagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Adience dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the PIC - ArcFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Adience dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the FaceTransformer+OctupletLoss model in the Octuplet Loss: Make Face Recognition Robust to Image Resolution paper on the XQLFW dataset? | Accuracy |
What metrics were used to measure the MS1MV2, R100, SFace model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the MS1MV2, R100, Arcface model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the MS1MV2, R100, Curricularface model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the VGGFace2, R50, ArcFace model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the CASIA-WebFace, R50, CosFace model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the Private-Asia, R50, ArcFace model in the MLFW: A Database for Face Recognition on Masked Faces paper on the MLFW dataset? | Accuracy |
What metrics were used to measure the Model with Up Convolution + DoG Filter model in the Thermal to Visible Face Recognition Using Deep Autoencoders paper on the UND-X1 dataset? | Rank-1 |
What metrics were used to measure the DPM model in the Deep Perceptual Mapping for Cross-Modal Face Recognition paper on the UND-X1 dataset? | Rank-1 |
What metrics were used to measure the FaceNet+Adaptive Threshold model in the Data-specific Adaptive Threshold for Face Recognition and Authentication paper on the Adience (Online Open Set) dataset? | Average Accuracy (10 times) |
What metrics were used to measure the FaceNet+Fixed Threshold (0.2487) model in the Data-specific Adaptive Threshold for Face Recognition and Authentication paper on the Adience (Online Open Set) dataset? | Average Accuracy (10 times) |
What metrics were used to measure the Partial FC model in the Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC paper on the MFR dataset? | MFR-ALL, MFR-MASK, African, Caucasian, South Asian, East Asian |
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the ElasticFace-Arc model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the DiscFace model in the DiscFace: Minimum Discrepancy Learning for Deep Face Recognition paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the CircleLoss(ours) model in the Circle Loss: A Unified Perspective of Pair Similarity Optimization paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the DCQ model in the Dynamic Class Queue for Large Scale Face Recognition In the Wild paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the DigiFace-1M model in the DigiFace-1M: 1 Million Digital Face Images for Face Recognition paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the QMagFace model in the QMagFace: Simple and Accurate Quality-Aware Face Recognition paper on the CFP-FP dataset? | Accuracy |
What metrics were used to measure the Prodpoly model in the Deep Polynomial Neural Networks paper on the AgeDB-30 dataset? | Accuracy |
What metrics were used to measure the ElasticFace-Cos model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the AgeDB-30 dataset? | Accuracy |
What metrics were used to measure the DCQ model in the Dynamic Class Queue for Large Scale Face Recognition In the Wild paper on the AgeDB-30 dataset? | Accuracy |
What metrics were used to measure the EdgeFace - S (g=0.5) (ours) model in the EdgeFace: Efficient Face Recognition Model for Edge Devices paper on the AgeDB-30 dataset? | Accuracy |
What metrics were used to measure the EdgeFace - XS (g=0.6) (ours) model in the EdgeFace: Efficient Face Recognition Model for Edge Devices paper on the AgeDB-30 dataset? | Accuracy |
What metrics were used to measure the Model with Up Convolution + DoG Filter (Aligned) model in the Thermal to Visible Face Recognition Using Deep Autoencoders paper on the Carl dataset? | Rank-1 |
What metrics were used to measure the DPM model in the Deep Perceptual Mapping for Cross-Modal Face Recognition paper on the Carl dataset? | Rank-1 |
What metrics were used to measure the Fine-tuned ArcFace model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the Fine-tuned FaceNet model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the ArcFace model in the ArcFace: Additive Angular Margin Loss for Deep Face Recognition paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the Fine-tuned VGG-Face model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the FaceNet model in the FaceNet: A Unified Embedding for Face Recognition and Clustering paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the VGG-Face model in the Deep Face Recognition paper on the CASIA-WebFace+masks dataset? | Accuracy |
What metrics were used to measure the D-Triplet(Resnet-50) model in the Domain-aware Triplet loss in Domain Generalization paper on the Office-Home dataset? | Average |
What metrics were used to measure the PIC - QMagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Color FERET dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the PIC - MagFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Color FERET dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the PIC - ArcFace model in the PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition paper on the Color FERET dataset? | FNMR [%] @ 10-3 FMR |
What metrics were used to measure the Multi-task model in the A New Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios paper on the UHDB31 dataset? | Rank-1 |
What metrics were used to measure the Prodpoly model in the Deep Polynomial Neural Networks paper on the CALFW dataset? | Accuracy |
What metrics were used to measure the ElasticFace-Arc model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the CALFW dataset? | Accuracy |
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the CALFW dataset? | Accuracy |
What metrics were used to measure the DigiFace-1M model in the DigiFace-1M: 1 Million Digital Face Images for Face Recognition paper on the CALFW dataset? | Accuracy |
What metrics were used to measure the DigiFace-1M model in the DigiFace-1M: 1 Million Digital Face Images for Face Recognition paper on the AgeDB dataset? | Accuracy |
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the CFP-FF dataset? | Accuracy |
What metrics were used to measure the GhostFaceNetV2-1 model in the GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations paper on the CPLFW dataset? | Accuracy |
What metrics were used to measure the ElasticFace-Arc model in the ElasticFace: Elastic Margin Loss for Deep Face Recognition paper on the CPLFW dataset? | Accuracy |
What metrics were used to measure the DigiFace-1M model in the DigiFace-1M: 1 Million Digital Face Images for Face Recognition paper on the CPLFW dataset? | Accuracy |
What metrics were used to measure the FaceNet+Adaptive Threshold model in the Data-specific Adaptive Threshold for Face Recognition and Authentication paper on the LFW (Online Open Set) dataset? | Average Accuracy (10 times) |
What metrics were used to measure the FaceNet+Fixed Threshold (0.3779) model in the Data-specific Adaptive Threshold for Face Recognition and Authentication paper on the LFW (Online Open Set) dataset? | Average Accuracy (10 times) |
What metrics were used to measure the Fine-tuned ArcFace model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the Fine-tuned FaceNet model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the ArcFace model in the ArcFace: Additive Angular Margin Loss for Deep Face Recognition paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the Fine-tuned VGG-Face model in the A realistic approach to generate masked faces applied on two novel masked face recognition data sets paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the FaceNet model in the FaceNet: A Unified Embedding for Face Recognition and Clustering paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the VGG-Face model in the Deep Face Recognition paper on the CelebA+masks dataset? | Accuracy |
What metrics were used to measure the Sequential forward selection model in the Greedy Search for Descriptive Spatial Face Features paper on the Cohn-Kanade dataset? | Accuracy |
What metrics were used to measure the EmoAffectNet LSTM model in the In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study paper on the Aff-Wild2 dataset? | UAR |
What metrics were used to measure the PAtt-Lite model in the PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the ARBEx model in the ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the KTN model in the Adaptively Learning Facial Expression Representation via C-F Labels and Distillation paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the FER-VT model in the Facial expression recognition with grid-wise attention and visual transformer paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the EAC model in the Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the LResNet50E-IR model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the Local Learning Deep + BOW model in the Local Learning with Deep and Handcrafted Features for Facial Expression Recognition paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the Ensemble with Shared Representations (ESR-9) model in the Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks paper on the FER+ dataset? | Accuracy |
What metrics were used to measure the FN2EN model in the FaceNet2ExpNet: Regularizing a Deep Face Recognition Net for Expression Recognition paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the PAtt-Lite model in the PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the ViT + SE model in the Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the FAN model in the Frame attention networks for facial expression recognition in videos paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the Nonlinear eval on SL + SSL puzzling (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the DeepEmotion model in the Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network paper on the CK+ dataset? | Accuracy (8 emotion), Accuracy (7 emotion), Accuracy (6 emotion) |
What metrics were used to measure the EmoAffectNet LSTM model in the In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study paper on the CREMA-D dataset? | UAR |
What metrics were used to measure the Covariance Pooling model in the Covariance Pooling For Facial Expression Recognition paper on the Real-World Affective Faces dataset? | Accuracy |
What metrics were used to measure the Multi Label Output model in the Facial Emotion Recognition: A multi-task approach using deep learning paper on the Real-World Affective Faces dataset? | Accuracy |
What metrics were used to measure the PAtt-Lite model in the PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the C-EXPR-NET model in the Multi-Label Compound Expression Recognition: C-EXPR Database & Network paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the DACL (ResNet-18) model in the Facial Expression Recognition in the Wild via Deep Attentive Center Loss paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the FaceBehaviorNet model in the Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the VGG-FACE model in the Deep Neural Network Augmentation: Generating Faces for Affect Analysis paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the MixAugment model in the MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the MT-ArcVGG model in the Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the ARBEx model in the ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the POSTER++ model in the POSTER++: A simpler and stronger facial expression recognition network paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the APViT model in the Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the ViT-base + MAE model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the EAC(ResNet-50) model in the Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the DAN model in the Distract Your Attention: Multi-head Cross Attention Network for Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the RUL (ResNet-18) model in the Relative Uncertainty Learning for Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the PSR model in the Pyramid With Super Resolution for In-the-Wild Facial Expression Recognition paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the EfficientFace model in the Robust Lightweight Facial Expression Recognition Network with Label Distribution Training paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the MA-Net model in the Learning Deep Global Multi-scale and Local Attention Features for Facial Expression Recognition in the Wild paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the ViT-base model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the ViT-tiny model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |
What metrics were used to measure the Ad-Corre model in the Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild paper on the RAF-DB dataset? | Avg. Accuracy, Overall Accuracy |