| prompts | metrics_response |
|---|---|
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the ATIS dataset? | Accuracy, F1, Intent Accuracy |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the ATIS dataset? | Accuracy, F1, Intent Accuracy |
What metrics were used to measure the Context Encoder model in the Improving Slot Filling by Utilizing Contextual Information paper on the ATIS dataset? | Accuracy, F1, Intent Accuracy |
What metrics were used to measure the CM-Net model in the CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding paper on the CAIS dataset? | Acc |
What metrics were used to measure the plain-LSTM model in the “Where is My Parcel?” Fast and Efficient Classifiers to Detect User Intent in Natural Language paper on the ASOS.com user intent dataset? | F1 |
What metrics were used to measure the linear-Ngrams model in the “Where is My Parcel?” Fast and Efficient Classifiers to Detect User Intent in Natural Language paper on the ASOS.com user intent dataset? | F1 |
What metrics were used to measure the glove-LSTM model in the “Where is My Parcel?” Fast and Efficient Classifiers to Detect User Intent in Natural Language paper on the ASOS.com user intent dataset? | F1 |
What metrics were used to measure the RoBERTa-Large + ICDA model in the Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information paper on the CLINC150 10-shot dataset? | Accuracy (%) |
What metrics were used to measure the RoBERTa-Large + ICDA model in the Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information paper on the HWU64 dataset? | Accuracy (%) |
What metrics were used to measure the RoBERTa-Large + ICDA model in the Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information paper on the BANKING77 5-shot dataset? | Accuracy (%) |
What metrics were used to measure the STCNN-V2 (Vote decision) model in the Baseline Method for the Sport Task of MediaEval 2022 with 3D CNNs using Attention Mechanisms paper on the TTStroke-21 ME22 dataset? | IoU, mAP |
What metrics were used to measure the RGB and PRGB model in the Fine-Grained Action Detection with RGB and Pose Information using Two Stream Convolutional Networks paper on the TTStroke-21 ME22 dataset? | IoU, mAP |
What metrics were used to measure the T-CNN model in the Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the MR-TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Action Tubes model in the Finding Action Tubes paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the DTS model in the Finding Action Tubes with a Sparse-to-Dense Framework paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one Two Stream model in the Dance with Flow: Two-in-One Stream Action Detection paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one model in the Dance with Flow: Two-in-One Stream Action Detection paper on the UCF Sports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TTM model in the Token Turing Machines paper on the Charades dataset? | mAP |
What metrics were used to measure the CTRN model in the CTRN: Class-Temporal Relational Network for Action Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the Coarse-Fine Networks (w/ self-supervised detection pretraining) model in the Weakly-guided Self-supervised Pretraining for Temporal Activity Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the PDAN (RGB+Flow) model in the PDAN: Pyramid Dilated Attention Network for Action Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the PAT model in the PAT: Position-Aware Transformer for Dense Multi-Label Action Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the MS-TCT (RGB only) model in the MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the 3D ResNet-50 + super-events pretrained on AViD model in the AViD Dataset: Anonymized Videos from Diverse Countries paper on the Charades dataset? | mAP |
What metrics were used to measure the Coarse-Fine Networks model in the Coarse-Fine Networks for Temporal Activity Detection in Videos paper on the Charades dataset? | mAP |
What metrics were used to measure the I3D + biGRU + VS-ST-MPNN model in the Representation Learning on Visual-Symbolic Graphs for Video Understanding paper on the Charades dataset? | mAP |
What metrics were used to measure the MLAD (RGB + Flow) model in the Modeling Multi-Label Action Dependencies for Temporal Action Localization paper on the Charades dataset? | mAP |
What metrics were used to measure the 3D ResNet-50 pretrained on AViD model in the AViD Dataset: Anonymized Videos from Diverse Countries paper on the Charades dataset? | mAP |
What metrics were used to measure the TGM (RGB+Flow) model in the Temporal Gaussian Mixture Layer for Videos paper on the Charades dataset? | mAP |
What metrics were used to measure the Super-events (RGB+Flow) model in the Learning Latent Super-Events to Detect Multiple Activities in Videos paper on the Charades dataset? | mAP |
What metrics were used to measure the R-C3D model in the R-C3D: Region Convolutional 3D Network for Temporal Activity Detection paper on the Charades dataset? | mAP |
What metrics were used to measure the Sigurdsson et al. model in the Asynchronous Temporal Fields for Action Recognition paper on the Charades dataset? | mAP |
What metrics were used to measure the HIT model in the Holistic Interaction Transformer Network for Action Detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the HISAN (VGG-16) model in the Hierarchical Self-Attention Network for Action Localization in Videos paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the YOWO + LFB model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the YOWO model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the MOC model in the Actions as Moving Points paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Faster-RCNN + two-stream I3D conv model in the AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TACNet model in the TACNet: Transition-Aware Context Network for Spatio-Temporal Action Detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the T-CNN model in the Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the MR-TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Actionness model in the Actionness Estimation Using Hybrid Fully Convolutional Networks paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Action Tubes model in the Finding Action Tubes paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the HISAN (ResNet-101 + FPN) model in the Hierarchical Self-Attention Network for Action Localization in Videos paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the DTS model in the Finding Action Tubes with a Sparse-to-Dense Framework paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one Two Stream model in the Dance with Flow: Two-in-One Stream Action Detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one model in the Dance with Flow: Two-in-One Stream Action Detection paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Actionness model in the Actionness Estimation Using Hybrid Fully Convolutional Networks paper on the J-HMDB dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the PAT model in the PAT: Position-Aware Transformer for Dense Multi-Label Action Detection paper on the MultiTHUMOS dataset? | mAP |
What metrics were used to measure the HIT model in the Holistic Interaction Transformer Network for Action Detection paper on the MultiSports dataset? | Frame-mAP 0.5, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TadML-two stream model in the TadML: A fast temporal action detection with Mechanics-MLP paper on the THUMOS'14 dataset? | mAP |
What metrics were used to measure the TadML-rgb model in the TadML: A fast temporal action detection with Mechanics-MLP paper on the THUMOS'14 dataset? | mAP |
What metrics were used to measure the STCNN model in the Spatio-Temporal CNN baseline method for the Sports Video Task of MediaEval 2021 benchmark paper on the TTStroke-21 ME21 dataset? | IoU, mAP |
What metrics were used to measure the Two Stream Network model in the Two Stream Network for Stroke Detection in Table Tennis paper on the TTStroke-21 ME21 dataset? | IoU, mAP |
What metrics were used to measure the STAR/L model in the End-to-End Spatio-Temporal Action Localisation with Video Transformers paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the YOWO + LFB model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the HIT model in the Holistic Interaction Transformer Network for Action Detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the YOWO model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the MOC model in the Actions as Moving Points paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Faster-RCNN + two-stream I3D conv model in the AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the STEP model in the STEP: Spatio-Temporal Progressive Learning for Video Action Detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the HISAN (VGG-16) model in the Hierarchical Self-Attention Network for Action Localization in Videos paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TACNet model in the TACNet: Transition-Aware Context Network for Spatio-Temporal Action Detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the T-CNN model in the Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the MR-TS R-CNN model in the Multi-region two-stream R-CNN for action detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the HISAN (ResNet-101 + FPN) model in the Hierarchical Self-Attention Network for Action Localization in Videos paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one Two Stream model in the Dance with Flow: Two-in-One Stream Action Detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the Two-in-one model in the Dance with Flow: Two-in-One Stream Action Detection paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the DTS model in the Finding Action Tubes with a Sparse-to-Dense Framework paper on the UCF101-24 dataset? | Frame-mAP 0.5, Video-mAP 0.1, Video-mAP 0.2, Video-mAP 0.5 |
What metrics were used to measure the PDAN model in the PDAN: Pyramid Dilated Attention Network for Action Detection paper on the TSU dataset? | Frame-mAP |
What metrics were used to measure the MS-TCT model in the MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection paper on the TSU dataset? | Frame-mAP |
What metrics were used to measure the MLAD model in the Modeling Multi-Label Action Dependencies for Temporal Action Localization paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the CTRN model in the CTRN: Class-Temporal Relational Network for Action Detection paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the PDAN model in the PDAN: Pyramid Dilated Attention Network for Action Detection paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the TGM model in the Temporal Gaussian Mixture Layer for Videos paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the MS-TCT (RGB only) model in the MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the I3D + our super-event model in the Learning Latent Super-Events to Detect Multiple Activities in Videos paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the Two-stream + LSTM model in the Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the Two-stream model in the Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos paper on the Multi-THUMOS dataset? | mAP |
What metrics were used to measure the Ensemble Model + Heuristic Post-Processing model in the A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection paper on the COVID-19 Fake News Dataset dataset? | F1 |
What metrics were used to measure the Hybrid CNNs (Text + All) model in the "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection paper on the LIAR dataset? | Test Accuracy, Validation Accuracy |
What metrics were used to measure the CNNs model in the "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection paper on the LIAR dataset? | Test Accuracy, Validation Accuracy |
What metrics were used to measure the Hybrid CNNs (Text + Speaker) model in the "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection paper on the LIAR dataset? | Test Accuracy, Validation Accuracy |
What metrics were used to measure the Bi-LSTMs model in the "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection paper on the LIAR dataset? | Test Accuracy, Validation Accuracy |
What metrics were used to measure the Sepúlveda-Torres R., Vicente M., Saquete E., Lloret E., Palomar M. (2021) model in the Exploring Summarization to Enhance Headline Stance Detection paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the ZAINAB A. JAWAD, AHMED J. OBAID (CNN and DNN with SCM, 2022) model in the Combination Of Convolution Neural Networks And Deep Neural Networks For Fake News Detection paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Bhatt et al. model in the On the Benefit of Combining Neural, Statistical and External Features for Fake News Identification paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Bi-LSTM (max-pooling, attention) model in the Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the 3rd place at FNC-1 - Team UCL Machine Reading (Riedel et al., 2017) model in the A simple but tough-to-beat baseline for the Fake News Challenge stance detection task paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Neural method from Mohtarami et al. + TF-IDF (Mohtarami et al., 2018) model in the Automatic Stance Detection Using End-to-End Memory Networks paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Neural method from Mohtarami et al. (Mohtarami et al., 2018) model in the Automatic Stance Detection Using End-to-End Memory Networks paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Baseline based on skip-thought embeddings (Bhatt et al., 2017) model in the On the Benefit of Combining Neural, Statistical and External Features for Fake News Identification paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Baseline based on word2vec + hand-crafted features (Bhatt et al., 2017) model in the On the Benefit of Combining Neural, Statistical and External Features for Fake News Identification paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the Neural baseline based on bi-directional LSTMs (Bhatt et al., 2017) model in the On the Benefit of Combining Neural, Statistical and External Features for Fake News Identification paper on the FNC-1 dataset? | Weighted Accuracy, Per-class Accuracy (Unrelated), Per-class Accuracy (Agree), Per-class Accuracy (Disagree), Per-class Accuracy (Discuss) |
What metrics were used to measure the SEMI-FND model in the SEMI-FND: Stacked Ensemble Based Multimodal Inference For Faster Fake News Detection paper on the Weibo NER dataset? | Accuracy |
What metrics were used to measure the Convolutional Tsetlin Machine model in the ConvTextTM: An Explainable Convolutional Tsetlin Machine Framework for Text Classification paper on the PolitiFact dataset? | 1:1 Accuracy |