| prompts | metrics_response |
|---|---|
What metrics were used to measure the SEA model in the SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries paper on the TRECVID-AVS17 (IACC.3) dataset? | infAP |
What metrics were used to measure the Dual Encoding model in the Dual Encoding for Video Retrieval by Text paper on the TRECVID-AVS17 (IACC.3) dataset? | infAP |
What metrics were used to measure the W2VV++ model in the W2VV++: Fully Deep Learning for Ad-hoc Video Search paper on the TRECVID-AVS17 (IACC.3) dataset? | infAP |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the TRECVID-AVS16 (IACC.3) dataset? | infAP |
What metrics were used to measure the SEA model in the SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries paper on the TRECVID-AVS16 (IACC.3) dataset? | infAP |
What metrics were used to measure the Dual Encoding model in the Dual Encoding for Video Retrieval by Text paper on the TRECVID-AVS16 (IACC.3) dataset? | infAP |
What metrics were used to measure the W2VV++ model in the W2VV++: Fully Deep Learning for Ad-hoc Video Search paper on the TRECVID-AVS16 (IACC.3) dataset? | infAP |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the TRECVID-AVS20 (V3C1) dataset? | infAP |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the TRECVID-AVS18 (IACC.3) dataset? | infAP |
What metrics were used to measure the SEA model in the SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries paper on the TRECVID-AVS18 (IACC.3) dataset? | infAP |
What metrics were used to measure the W2VV++ model in the W2VV++: Fully Deep Learning for Ad-hoc Video Search paper on the TRECVID-AVS18 (IACC.3) dataset? | infAP |
What metrics were used to measure the Dual Encoding model in the Dual Encoding for Video Retrieval by Text paper on the TRECVID-AVS18 (IACC.3) dataset? | infAP |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the TRECVID-AVS19 (V3C1) dataset? | infAP |
What metrics were used to measure the SEA model in the SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries paper on the TRECVID-AVS19 (V3C1) dataset? | infAP |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the Classic dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the Classic dataset? | Accuracy |
What metrics were used to measure the SciNCL model in the Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings paper on the SciDocs (MeSH) dataset? | F1 (micro) |
What metrics were used to measure the SPECTER model in the SPECTER: Document-level Representation Learning using Citation-informed Transformers paper on the SciDocs (MeSH) dataset? | F1 (micro) |
What metrics were used to measure the Document Classification Using Importance of Sentences model in the Improving Document-Level Sentiment Classification Using Importance of Sentences paper on the IMDb-M dataset? | Accuracy |
What metrics were used to measure the LSTM-reg (single model) model in the Rethinking Complex Neural Network Architectures for Document Classification paper on the IMDb-M dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the Amazon dataset? | Accuracy |
What metrics were used to measure the Orthogonalized Soft VSM model in the Text classification with word embedding regularization and soft similarity measure paper on the Amazon dataset? | Accuracy |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the Amazon dataset? | Accuracy |
What metrics were used to measure the ConvTextTM model in the ConvTextTM: An Explainable Convolutional Tsetlin Machine Framework for Text Classification paper on the WOS-5736 dataset? | Accuracy |
What metrics were used to measure the HDLTex model in the HDLTex: Hierarchical Deep Learning for Text Classification paper on the WOS-5736 dataset? | Accuracy |
What metrics were used to measure the ACNet model in the Adaptively Connected Neural Networks paper on the Cora dataset? | Accuracy |
What metrics were used to measure the LGCN model in the Large-Scale Learnable Graph Convolutional Networks paper on the Cora dataset? | Accuracy |
What metrics were used to measure the GAT model in the Graph Attention Networks paper on the Cora dataset? | Accuracy |
What metrics were used to measure the MoNet model in the Geometric deep learning on graphs and manifolds using mixture model CNNs paper on the Cora dataset? | Accuracy |
What metrics were used to measure the Planetoid* model in the Revisiting Semi-Supervised Learning with Graph Embeddings paper on the Cora dataset? | Accuracy |
What metrics were used to measure the DeepWalk model in the DeepWalk: Online Learning of Social Representations paper on the Cora dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the Orthogonalized Soft VSM model in the Text classification with word embedding regularization and soft similarity measure paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the MAGNET model in the MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the VLAWE model in the Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the KD-LSTMreg model in the DocBERT: BERT for Document Classification paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the LSTM-reg (single model) model in the Rethinking Complex Neural Network Architectures for Document Classification paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the SCDV-MS model in the Improving Document Classification with Multi-Sense Embeddings paper on the Reuters-21578 dataset? | Accuracy, F1 |
What metrics were used to measure the HDLTex model in the HDLTex: Hierarchical Deep Learning for Text Classification paper on the WOS-46985 dataset? | Accuracy |
What metrics were used to measure the MPAD-path model in the Message Passing Attention Networks for Document Understanding paper on the BBCSport dataset? | Accuracy |
What metrics were used to measure the Orthogonalized Soft VSM model in the Text classification with word embedding regularization and soft similarity measure paper on the BBCSport dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the BBCSport dataset? | Accuracy |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the BBCSport dataset? | Accuracy |
What metrics were used to measure the BilBOWA model in the BilBOWA: Fast Bilingual Distributed Representations without Word Alignments paper on the Reuters En-De dataset? | Accuracy |
What metrics were used to measure the KD-LSTMreg model in the DocBERT: BERT for Document Classification paper on the Yelp-14 dataset? | Accuracy |
What metrics were used to measure the BilBOWA model in the BilBOWA: Fast Bilingual Distributed Representations without Word Alignments paper on the Reuters De-En dataset? | Accuracy |
What metrics were used to measure the HDLTex model in the HDLTex: Hierarchical Deep Learning for Text Classification paper on the WOS-11967 dataset? | Accuracy |
What metrics were used to measure the SPECTER model in the SPECTER: Document-level Representation Learning using Citation-informed Transformers paper on the SciDocs (MAG) dataset? | F1 (micro) |
What metrics were used to measure the SciNCL model in the Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings paper on the SciDocs (MAG) dataset? | F1 (micro) |
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the HOC dataset? | F1, Micro F1 |
What metrics were used to measure the NCBI_BERT(large) (P) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the HOC dataset? | F1, Micro F1 |
What metrics were used to measure the SciFive-large model in the SciFive: a text-to-text transformer model for biomedical literature paper on the HOC dataset? | F1, Micro F1 |
What metrics were used to measure the BioGPT model in the BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining paper on the HOC dataset? | F1, Micro F1 |
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the HOC dataset? | F1, Micro F1 |
What metrics were used to measure the KD-LSTMreg model in the DocBERT: BERT for Document Classification paper on the AAPD dataset? | F1 |
What metrics were used to measure the MAGNET model in the MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network paper on the AAPD dataset? | F1 |
What metrics were used to measure the MPAD-path model in the Message Passing Attention Networks for Document Understanding paper on the MPQA dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the Twitter dataset? | Accuracy |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the Twitter dataset? | Accuracy |
What metrics were used to measure the Orthogonalized Soft VSM model in the Text classification with word embedding regularization and soft similarity measure paper on the Twitter dataset? | Accuracy |
What metrics were used to measure the ApproxRepSet model in the Rep the Set: Neural Networks for Learning Set Representations paper on the Recipe dataset? | Accuracy |
What metrics were used to measure the REL-RWMD k-NN model in the Speeding up Word Mover's Distance and its variants via properties of distances between embeddings paper on the Recipe dataset? | Accuracy |
What metrics were used to measure the CodeTrans-MT-TF-Large model in the CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing paper on the DeepAPI dataset? | BLEU-4 |
What metrics were used to measure the ResNet50 model in the Unsupervised Learning of Visual Features by Contrasting Cluster Assignments paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the What Makes for Good Views for Contrastive Learning? paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Improved Baselines with Momentum Contrastive Learning paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 (v2) model in the Prototypical Contrastive Learning of Unsupervised Representations paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 (v2) model in the Data-Efficient Image Recognition with Contrastive Predictive Coding paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Self-Supervised Learning of Pretext-Invariant Representations paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Prototypical Contrastive Learning of Unsupervised Representations paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Self-labelling via simultaneous clustering and representation learning paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 (4×) model in the Large Scale Adversarial Representation Learning paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Momentum Contrast for Unsupervised Visual Representation Learning paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Local Aggregation for Unsupervised Learning of Visual Embeddings paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet50 model in the Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ResNet v2 101 model in the Data-Efficient Image Recognition with Contrastive Predictive Coding paper on the imagenet-1k dataset? | ImageNet Top-1 Accuracy |
What metrics were used to measure the ByteCover2 model in the BYTECOVER2: TOWARDS DIMENSIONALITY REDUCTION OF LATENT EMBEDDING FOR EFFICIENT COVER SONG IDENTIFICATION paper on the Da-TACOS dataset? | mAP |
What metrics were used to measure the ByteCover model in the ByteCover: Cover Song Identification via Multi-Loss Training paper on the Da-TACOS dataset? | mAP |
What metrics were used to measure the SCNN-M model in the SIMILARITY LEARNING FOR COVER SONG IDENTIFICATION USING CROSS-SIMILARITY MATRICES OF MULTI-LEVEL DEEP SEQUENCES paper on the YouTube350 dataset? | MAP |
What metrics were used to measure the ByteCover model in the ByteCover: Cover Song Identification via Multi-Loss Training paper on the YouTube350 dataset? | MAP |
What metrics were used to measure the CQT-Net model in the Learning a Representation for Cover Song Identification Using Convolutional Neural Network paper on the YouTube350 dataset? | MAP |
What metrics were used to measure the MOVE model in the Accurate and Scalable Version Identification Using Musically-Motivated Embeddings paper on the YouTube350 dataset? | MAP |
What metrics were used to measure the ByteCover2 model in the BYTECOVER2: TOWARDS DIMENSIONALITY REDUCTION OF LATENT EMBEDDING FOR EFFICIENT COVER SONG IDENTIFICATION paper on the Covers80 dataset? | MAP |
What metrics were used to measure the SCNN-M model in the SIMILARITY LEARNING FOR COVER SONG IDENTIFICATION USING CROSS-SIMILARITY MATRICES OF MULTI-LEVEL DEEP SEQUENCES paper on the Covers80 dataset? | MAP |
What metrics were used to measure the ByteCover model in the ByteCover: Cover Song Identification via Multi-Loss Training paper on the Covers80 dataset? | MAP |
What metrics were used to measure the MOVE model in the Accurate and Scalable Version Identification Using Musically-Motivated Embeddings paper on the Covers80 dataset? | MAP |
What metrics were used to measure the CQT-Net model in the Learning a Representation for Cover Song Identification Using Convolutional Neural Network paper on the Covers80 dataset? | MAP |
What metrics were used to measure the ByteCover2 model in the BYTECOVER2: TOWARDS DIMENSIONALITY REDUCTION OF LATENT EMBEDDING FOR EFFICIENT COVER SONG IDENTIFICATION paper on the SHS100K-TEST dataset? | mAP
What metrics were used to measure the ByteCover model in the ByteCover: Cover Song Identification via Multi-Loss Training paper on the SHS100K-TEST dataset? | mAP |
What metrics were used to measure the SCNN-M model in the SIMILARITY LEARNING FOR COVER SONG IDENTIFICATION USING CROSS-SIMILARITY MATRICES OF MULTI-LEVEL DEEP SEQUENCES paper on the SHS100K-TEST dataset? | mAP |
What metrics were used to measure the CQT-Net model in the Learning a Representation for Cover Song Identification Using Convolutional Neural Network paper on the SHS100K-TEST dataset? | mAP |
What metrics were used to measure the M3E-Yolo model in the M3E-Yolo: A New Lightweight Network for Traffic Sign Recognition paper on the CCTSDB2021 dataset? | mAP@0.5 |
What metrics were used to measure the Faster R-CNN Inception Resnet V2 model in the Evaluation of deep neural networks for traffic sign detection systems paper on the GTSDB dataset? | mAP |
What metrics were used to measure the CABNet model in the Context-Aware Block Net for Small Object Detection paper on the TT100K dataset? | mAP@0.5 |
What metrics were used to measure the DiverseMotion (s=1) model in the DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the DiverseMotion (s=2) model in the DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the ReMoDiffuse model in the ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the AttT2M model in the AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
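The rows above form a two-column question–answer table (`prompts`, `metrics_response`). As a minimal sketch of how such pipe-delimited rows can be parsed, assuming the table is available as plain text (the in-memory `raw` string below copies two rows verbatim; the storage format is an assumption, not part of the source):

```python
import csv
import io

# Two sample rows copied from the table above, in the same
# "question | answer" pipe-delimited layout.
raw = "\n".join([
    "prompts|metrics_response",
    "What metrics were used to measure the GAT model in the Graph "
    "Attention Networks paper on the Cora dataset? | Accuracy",
    "What metrics were used to measure the ByteCover model in the "
    "ByteCover: Cover Song Identification via Multi-Loss Training "
    "paper on the Da-TACOS dataset? | mAP",
])

# Parse into {prompt: metric} pairs, trimming the padding around "|".
reader = csv.DictReader(io.StringIO(raw), delimiter="|")
records = {row["prompts"].strip(): row["metrics_response"].strip()
           for row in reader}

# Look up the metric reported for the Cora-dataset row.
cora_metric = next(m for p, m in records.items() if "Cora" in p)
print(cora_metric)  # prints "Accuracy"
```

The same approach extends to the full table; only the delimiter and whitespace trimming matter, since the questions contain colons and commas but no embedded pipes.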