Columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
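The rows below alternate strictly between the two columns: a prompt line followed by its metrics_response line. A minimal sketch of pairing such a flat line sequence back into (prompt, response) records, assuming that strict alternation (the helper name `pair_rows` and the sample strings are illustrative, not part of the dataset):

```python
def pair_rows(lines):
    """Group a flat list of alternating lines into (prompt, response) tuples."""
    it = iter(lines)
    # zip consumes the same iterator twice, yielding consecutive pairs
    return list(zip(it, it))

rows = [
    "What metrics were used to measure the ARBEx model ... on the JAFFE dataset?",
    "Accuracy",
]
print(pair_rows(rows))
```

If the number of lines is odd, the trailing unpaired line is silently dropped, so a length check beforehand is advisable when parsing a real export.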
What metrics were used to measure the RAN (ResNet-18) model in the Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition paper on the RAF-DB dataset?
Avg. Accuracy, Overall Accuracy
What metrics were used to measure the ARBEx model in the ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning paper on the JAFFE dataset?
Accuracy
What metrics were used to measure the ViT model in the Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition paper on the JAFFE dataset?
Accuracy
What metrics were used to measure the DeepEmotion model in the Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network paper on the JAFFE dataset?
Accuracy
What metrics were used to measure the Dynamic MTL model in the Dynamic Multi-Task Learning for Face Recognition with Facial Expression paper on the Oulu-CASIA dataset?
Accuracy (10-fold)
What metrics were used to measure the PPDN model in the Peak-Piloted Deep Network for Facial Expression Recognition paper on the Oulu-CASIA dataset?
Accuracy (10-fold)
What metrics were used to measure the DeepEmotion model in the Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network paper on the FERG dataset?
Accuracy
What metrics were used to measure the ARBEx model in the ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning paper on the FERG dataset?
Accuracy
What metrics were used to measure the ARBEx model in the ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning paper on the AffWild2 dataset?
Accuracy
What metrics were used to measure the RAN (VGG16+ResNet18) model in the Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition paper on the SFEW dataset?
Accuracy
What metrics were used to measure the ViT + SE model in the Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition paper on the SFEW dataset?
Accuracy
What metrics were used to measure the Island Loss model in the Island Loss for Learning Discriminative Features in Facial Expression Recognition paper on the SFEW dataset?
Accuracy
What metrics were used to measure the Covariance Pooling model in the Covariance Pooling For Facial Expression Recognition paper on the Static Facial Expressions in the Wild dataset?
Accuracy
What metrics were used to measure the VGG-VD-16 model in the Learning Grimaces by Watching TV paper on the Static Facial Expressions in the Wild dataset?
Accuracy
What metrics were used to measure the Facial Motion Prior Network model in the Facial Motion Prior Networks for Facial Expression Recognition paper on the MMI dataset?
Accuracy
What metrics were used to measure the Ours (VGG-F) model in the Pre-training strategies and datasets for facial representation learning paper on the BP4D dataset?
ICC
What metrics were used to measure the KTN model in the Adaptively Learning Facial Expression Representation via C-F Labels and Distillation paper on the FERPlus dataset?
Accuracy (pretrained)
What metrics were used to measure the RAN (VGG-16) model in the Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition paper on the FERPlus dataset?
Accuracy (pretrained)
What metrics were used to measure the SENet Teacher model in the Emotion Recognition in Speech using Cross-Modal Transfer in the Wild paper on the FERPlus dataset?
Accuracy (pretrained)
What metrics were used to measure the Local Learning Deep + BOW model in the Local Learning with Deep and Handcrafted Features for Facial Expression Recognition paper on the FERPlus dataset?
Accuracy (pretrained)
What metrics were used to measure the ViT + SE model in the Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition paper on the RaFD dataset?
Accuracy
What metrics were used to measure the EmoAffectNet LSTM model in the In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study paper on the RAVDESS dataset?
UAR
What metrics were used to measure the Ours (VGG-F) model in the Pre-training strategies and datasets for facial representation learning paper on the DISFA dataset?
ICC
What metrics were used to measure the ResNet50 model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the LResNet50E-IR (5 models with augmentation) model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the EAC model in the Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the LResNet50E-IR (1 model with augmentation) model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the LResNet50E-IR (1 model) model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the Multi-task EfficientNet-B0 model in the Facial expression and attributes recognition based on multi-task learning of lightweight neural networks paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the resnet18_noisy model in the Noisy Student Training using Body Language Dataset Improves Facial Expression Recognition paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the resnet18 model in the Frame attention networks for facial expression recognition in videos paper on the Acted Facial Expressions In The Wild (AFEW) dataset?
Accuracy (on validation set)
What metrics were used to measure the POSTER++ model in the POSTER++: A simpler and stronger facial expression recognition network paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Multi-task EfficientNet-B2 model in the Classifying emotions and engagement in online learning based on a single facial expression recognition neural network paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the MT-ArcRes model in the Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the ViT-base + MAE model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the DAN model in the Distract Your Attention: Multi-head Cross Attention Network for Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + SSL in-painting-pl (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Distilled student model in the Leveraging Recent Advances in Deep Learning for Audio-Visual Emotion Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Multi-task EfficientNet-B0 model in the Facial expression and attributes recognition based on multi-task learning of lightweight neural networks paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + SSL puzzling (B2) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + SSL puzzling (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the PSR (VGG-16) model in the Pyramid With Super Resolution for In-the-Wild Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the VGG-FACE model in the Deep Neural Network Augmentation: Generating Faces for Affect Analysis paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL (B2) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the MA-Net model in the Learning Deep Global Multi-scale and Local Attention Features for Facial Expression Recognition in the Wild paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the EfficientFace model in the Robust Lightweight Facial Expression Recognition Network with Label Distribution Training paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the CNNs and BOVW + local SVM model in the Local Learning with Deep and Handcrafted Features for Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the RAN (ResNet-18+) model in the Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Ensemble with Shared Representations (ESR-9) model in the Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the ViT-tiny model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Weighted-Loss model in the AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the ViT-base model in the Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + SSL in-painting-pl + 20% train (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + SSL puzzling + 20% train (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the LResNet50E-IR model in the Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the SL + 20% train (B0) model in the Using Self-Supervised Auxiliary Tasks to Improve Fine-Grained Facial Representation paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Emotion-GCN model in the Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the EmoAffectNet model in the In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the FaceBehaviorNet model in the Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the EAC model in the Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the PAENet model in the Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the DACL model in the Facial Expression Recognition in the Wild via Deep Attentive Center Loss paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Ad-Corre model in the Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the CAKE model in the CAKE: Compact and Accurate K-dimensional representation of Emotion paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the Facial Motion Prior Network model in the Facial Motion Prior Networks for Facial Expression Recognition paper on the AffectNet dataset?
Accuracy (8 emotion), Accuracy (7 emotion)
What metrics were used to measure the PAtt-Lite model in the PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Ensemble ResMaskingNet with 6 other CNNs model in the Challenges in Representation Learning: A report on three machine learning contests paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Segmentation VGG-19 model in the A novel facial emotion recognition model using segmentation VGG-19 architecture paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Local Learning Deep + BOW model in the Local Learning with Deep and Handcrafted Features for Facial Expression Recognition paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the LHC-Net model in the Local Multi-Head Channel Self-Attention for Facial Expression Recognition paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Residual Masking Network model in the Challenges in Representation Learning: A report on three machine learning contests paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the ResNet18 With Tricks model in the Fer2013 Recognition - ResNet18 With Tricks paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the VGGNet model in the Facial Emotion Recognition: State of the Art Performance on FER2013 paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the VGG model in the Facial Expression Recognition using Convolutional Neural Networks: State of the Art paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Res-Net model in the Facial Expression Recognition using Convolutional Neural Networks: State of the Art paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the CNN Hyperparameter Optimisation model in the Convolutional Neural Network Hyperparameters optimization for Facial Emotion Recognition paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Ad-Corre model in the Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Inception model in the Facial Expression Recognition using Convolutional Neural Networks: State of the Art paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the DeepEmotion model in the Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the Local Learning BOW model in the Challenges in Representation Learning: A report on three machine learning contests paper on the FER2013 dataset?
Accuracy
What metrics were used to measure the EmoAffectNet LSTM model in the In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study paper on the SAVEE dataset?
UAR
What metrics were used to measure the S_1^R model in the Pragmatically Informative Text Generation paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the EDA_CS model in the Copy mechanism and tailored training for character-based data-to-text generation paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the Slug model in the A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the TGen model in the Findings of the E2E NLG Challenge paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the EDA_CS (TL) model in the Copy mechanism and tailored training for character-based data-to-text generation paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the Sys1-Primary model in the TNT-NLG, System 1: Using a statistical NLG to massively augment crowd-sourced data for neural generation paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the Zhang model in the Attention Regularized Sequence-to-Sequence Learning for E2E NLG Challenge paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the Gong model in the Technical Report for E2E NLG Challenge paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the TUDA model in the E2E NLG Challenge: Neural Models vs. Templates paper on the E2E NLG Challenge dataset?
BLEU, METEOR, NIST, ROUGE-L, CIDEr
What metrics were used to measure the Control Prefixes (T5-large) model in the Control Prefixes for Parameter-Efficient Text Generation paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the DataTuner_FC model in the Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the TGen model in the Semantic Noise Matters for Neural Natural Language Generation paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the LSTM model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the TGen model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the BART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the T5 model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the Cleaned E2E NLG Challenge dataset?
BLEU (Test set), METEOR (Validation set)
What metrics were used to measure the Control Prefixes (A1, A2, T5-large) model in the Control Prefixes for Parameter-Efficient Text Generation paper on the WebNLG Full dataset?
BLEU