| prompts | metrics_response |
|---|---|
What metrics were used to measure the MedAIR model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the NCC Next model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the MediCIS model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the SK model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the JHU-CIRL model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the Hutom model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the OPT-13b model in the KAMEL: Knowledge Analysis with Multitoken Entities in Language Models paper on the KAMEL dataset? | Average F1 |
What metrics were used to measure the Semi-supervision model in the Modeling Label Semantics for Predicting Emotional Reactions paper on the ROCStories dataset? | F1 |
What metrics were used to measure the NPN + Explanation Training model in the Modeling Naive Psychology of Characters in Simple Commonsense Stories paper on the ROCStories dataset? | F1 |
What metrics were used to measure the SpanEmo model in the SpanEmo: Casting Multi-label Emotion Classification as Span-prediction paper on the SemEval 2018 Task 1E-c dataset? | Accuracy, Micro-F1, Macro-F1 |
What metrics were used to measure the BERT+DK model in the Improving Multi-label Emotion Classification by Integrating both General and Domain-specific Knowledge paper on the SemEval 2018 Task 1E-c dataset? | Accuracy, Micro-F1, Macro-F1 |
What metrics were used to measure the BERT-GCN model in the EmoGraph: Capturing Emotion Correlations using Graph Networks paper on the SemEval 2018 Task 1E-c dataset? | Accuracy, Micro-F1, Macro-F1 |
What metrics were used to measure the Transformer (finetune) model in the Practical Text Classification With Large Pre-Trained Language Models paper on the SemEval 2018 Task 1E-c dataset? | Accuracy, Micro-F1, Macro-F1 |
What metrics were used to measure the ProxEmo (ours) model in the ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation paper on the EWALK dataset? | Accuracy |
What metrics were used to measure the STEP [bhattacharya2019step] model in the ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation paper on the EWALK dataset? | Accuracy |
What metrics were used to measure the Baseline (Vanilla LSTM) [Ewalk] model in the ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation paper on the EWALK dataset? | Accuracy |
What metrics were used to measure the ERANN-0-4 model in the ERANNs: Efficient Residual Audio Neural Networks for Audio Pattern Recognition paper on the RAVDESS dataset? | Top-1 Accuracy |
What metrics were used to measure the BERT model in the GoEmotions: A Dataset of Fine-Grained Emotions paper on the GoEmotions dataset? | Average F1 |
What metrics were used to measure the MARLIN (ViT-L) model in the MARLIN: Masked Autoencoder for facial video Representation LearnINg paper on the CMU-MOSEI dataset? | Accuracy |
What metrics were used to measure the MARLIN (ViT-B) model in the MARLIN: Masked Autoencoder for facial video Representation LearnINg paper on the CMU-MOSEI dataset? | Accuracy |
What metrics were used to measure the MARLIN (ViT-S) model in the MARLIN: Masked Autoencoder for facial video Representation LearnINg paper on the CMU-MOSEI dataset? | Accuracy |
What metrics were used to measure the MLKNN model in The Many Faces of Anger: A Multicultural Video Dataset of Negative Emotions in the Wild (MFA-Wild) paper on the MFA dataset? | F-F1 score (Comb.), F-F1 score (Persian), V-F1 score (Comb.), V-F1 score (NA), F-F1 score (NA), V-F1 score (Persian) |
What metrics were used to measure the CC - XGB model in The Many Faces of Anger: A Multicultural Video Dataset of Negative Emotions in the Wild (MFA-Wild) paper on the MFA dataset? | F-F1 score (Comb.), F-F1 score (Persian), V-F1 score (Comb.), V-F1 score (NA), F-F1 score (NA), V-F1 score (Persian) |
What metrics were used to measure the UnLoc-L model in the UnLoc: A Unified Framework for Video Localization Tasks paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the CG-DETR model in the Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the UnLoc-B model in the UnLoc: A Unified Framework for Video Localization Tasks paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the QD-DETR (Only Video) model in the Query-Dependent Video Representation for Moment Retrieval and Highlight Detection paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the Moment-DETR w/ PT (on 10K HowTo100M videos) model in the QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the Moment-DETR model in the QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the UMT (VO) model in the UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the UMT (VA) model in the UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the SimVTP model in the SimVTP: Simple Video Text Pre-training with Masked Autoencoders paper on the Charades-STA dataset? | R@1 IoU=0.5, R@1 IoU=0.7, R@5 IoU=0.5, R@5 IoU=0.7 |
What metrics were used to measure the CG-DETR (w/ PT) model in the Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UniVTG (w/ PT) model in the UniVTG: Towards Unified Video-Language Temporal Grounding paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the CG-DETR model in the Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the QD-DETR (w/ PT) model in the Query-Dependent Video Representation for Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the QD-DETR model in the Query-Dependent Video Representation for Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the QD-DETR (only Video w/ PT) model in the Query-Dependent Video Representation for Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the QD-DETR (only Video) model in the Query-Dependent Video Representation for Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UMT (w. PT) model in the UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the Moment-DETR w/ PT model in the QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UMT model in the UMT: Unified Multi-modal Transformers for Joint Video Moment Retrieval and Highlight Detection paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UniVTG model in the UniVTG: Towards Unified Video-Language Temporal Grounding paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the SeViLA-Localizer model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UnLoc-L model in the UnLoc: A Unified Framework for Video Localization Tasks paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the UnLoc-B model in the UnLoc: A Unified Framework for Video Localization Tasks paper on the QVHighlights dataset? | mAP, R@1 IoU=0.5, R@1 IoU=0.7, mAP@0.5, mAP@0.75 |
What metrics were used to measure the TagRec(BERT+USE) model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the TagRec(BERT+Sent BERT) model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the BERT+sent2vec model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the BERT+GloVe model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the Twin BERT model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the Pretrained Sent BERT model in the TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy paper on the QC-Science dataset? | R@5, R@10, R@15, R@20 |
What metrics were used to measure the MobileNetV2 model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the VGG-19 model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the EfficientNet-P model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the mAlexNet model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the ResNet50 model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the CFEN model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the ViT model in the Revising deep learning methods in parking lot occupancy detection paper on the Action-Camera Parking dataset? | F1-score |
What metrics were used to measure the EfficientNet-P model in the Revising deep learning methods in parking lot occupancy detection paper on the ACMPS dataset? | F1-score |
What metrics were used to measure the MobileNetV2 model in the Revising deep learning methods in parking lot occupancy detection paper on the ACMPS dataset? | F1-score |
What metrics were used to measure the CarNet model in the Revising deep learning methods in parking lot occupancy detection paper on the ACMPS dataset? | F1-score |
What metrics were used to measure the CFEN model in the Revising deep learning methods in parking lot occupancy detection paper on the ACMPS dataset? | F1-score |
What metrics were used to measure the ResNet50 model in the Revising deep learning methods in parking lot occupancy detection paper on the ACMPS dataset? | F1-score |
What metrics were used to measure the EfficientNet-P model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the MobileNetV2 model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the VGG-19 model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the ResNet50 model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the CarNet model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the ViT model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the CFEN model in the Revising deep learning methods in parking lot occupancy detection paper on the CNRPark+EXT dataset? | F1-score |
What metrics were used to measure the T-YOLO model in the T-YOLO: Tiny Vehicle Detection Based on YOLO and Multi-Scale Convolutional Neural Networks paper on the PKLot dataset? | Average-mAP, F1-score |
What metrics were used to measure the EfficientNet-P model in the Revising deep learning methods in parking lot occupancy detection paper on the PKLot dataset? | Average-mAP, F1-score |
What metrics were used to measure the VGG-19 model in the Revising deep learning methods in parking lot occupancy detection paper on the PKLot dataset? | Average-mAP, F1-score |
What metrics were used to measure the ResNet50 model in the Revising deep learning methods in parking lot occupancy detection paper on the PKLot dataset? | Average-mAP, F1-score |
What metrics were used to measure the EfficientNet-P model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the ViT model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the CarNet model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the MobileNetV2 model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the VGG-19 model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the ResNet50 model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the CFEN model in the Revising deep learning methods in parking lot occupancy detection paper on the SPKL dataset? | F1-score |
What metrics were used to measure the HOW+ASMK model in The 2021 Image Similarity Dataset and Challenge paper on the DISC21 dev dataset? | w/o normalization, with normalization, Time (ms), dimension, hardware |
What metrics were used to measure the Multigrain 1500 dim model in The 2021 Image Similarity Dataset and Challenge paper on the DISC21 dev dataset? | w/o normalization, with normalization, Time (ms), dimension, hardware |
What metrics were used to measure the GIST PCA 256 model in The 2021 Image Similarity Dataset and Challenge paper on the DISC21 dev dataset? | w/o normalization, with normalization, Time (ms), dimension, hardware |
What metrics were used to measure the GIST 960 dim model in The 2021 Image Similarity Dataset and Challenge paper on the DISC21 dev dataset? | w/o normalization, with normalization, Time (ms), dimension, hardware |
What metrics were used to measure the C3PO model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the LaserTagger (Transformer) model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the LaserTagger (LSTM) model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the SequenceR (Transformer) model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the SequenceR model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the Path2Tree (Transformer) model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the Path2Tree (LSTM) model in the A Structural Model for Contextual Code Changes paper on the C# EditCompletion dataset? | Accuracy |
What metrics were used to measure the BSPM-EM model in the Blurring-Sharpening Process Models for Collaborative Filtering paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the BSPM-LM model in the Blurring-Sharpening Process Models for Collaborative Filtering paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the LT-OCF model in the LT-OCF: Learnable-Time ODE-based Collaborative Filtering paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the UltraGCN model in the UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the GF-CF model in the How Powerful is Graph Convolution for Recommendation? paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the LightGCN model in the LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation paper on the Gowalla dataset? | Recall@20 |
What metrics were used to measure the NGCF model in the Neural Graph Collaborative Filtering paper on the Gowalla dataset? | Recall@20 |
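A minimal sketch of how these prompt/metrics_response pairs could be loaded and inspected, assuming the rows above are exported to a local CSV file named `metrics_qa.csv` (hypothetical name) with the two columns shown in the header:

```python
# Minimal sketch: load and inspect the prompt/metrics_response pairs.
# Assumes a local export "metrics_qa.csv" (hypothetical file name) with
# columns "prompts" and "metrics_response" matching the table header.
import pandas as pd

df = pd.read_csv("metrics_qa.csv")

# Basic sanity checks on the two columns.
print(df.shape)                                     # (num_rows, 2)
print(df["prompts"].str.len().describe())           # prompt string lengths
print(df["metrics_response"].str.len().describe())  # response string lengths

# Example: group rows by the dataset named in each prompt
# (prompts follow the pattern "... on the <dataset> dataset?").
df["dataset"] = df["prompts"].str.extract(r"on the (.+?) dataset\?", expand=False)
print(df.groupby("dataset")["metrics_response"].nunique().head())
```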