prompts | metrics_response |
|---|---|
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the Stochastic Muzero model in the Planning in Stochastic Environments with a Learned Model paper on the The Game of 2048 dataset? | Average Score |
What metrics were used to measure the AlphaZero (With Simulator) model in the Planning in Stochastic Environments with a Learned Model paper on the The Game of 2048 dataset? | Average Score |
What metrics were used to measure the MuZero model in the Planning in Stochastic Environments with a Learned Model paper on the The Game of 2048 dataset? | Average Score |
What metrics were used to measure the Beam Search model in the Playing 2048 With Reinforcement Learning paper on the The Game of 2048 dataset? | Average Score |
What metrics were used to measure the DQN (1000 episodes) model in the Playing 2048 With Reinforcement Learning paper on the The Game of 2048 dataset? | Average Score |
What metrics were used to measure the ClipSitu model in the ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the CoFormer model in the Collaborative Transformers for Grounded Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the SituFormer model in the Rethinking the Two-Stage Framework for Grounded Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the Kernel GraphNet model in the Mixture-Kernel Graph Attention Network for Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the GSRTR model in the Grounded Situation Recognition with Transformers paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the JSL model in the Grounded Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the ISL model in the Grounded Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the CAQ + RE-VGG model in the Attention-Based Context Aware Reasoning for Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the GraphNet model in the Situation Recognition with Graph Neural Networks paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the RNN + Fusion model in the Recurrent Models for Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the CRF + Aug model in the Commonly Uncommon: Semantic Sparsity in Situation Recognition paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the CRF model in the Situation Recognition: Visual Semantic Role Labeling for Image Understanding paper on the imSitu dataset? | Top-1 Verb, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Value |
What metrics were used to measure the ClipSitu model in the ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the CoFormer model in the Collaborative Transformers for Grounded Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the SituFormer model in the Rethinking the Two-Stage Framework for Grounded Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the Kernel GraphNet model in the Mixture-Kernel Graph Attention Network for Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the GSRTR model in the Grounded Situation Recognition with Transformers paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the JSL model in the Grounded Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the ISL model in the Grounded Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the CAQ + RE-VGG model in the Attention-Based Context Aware Reasoning for Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the GraphNet model in the Situation Recognition with Graph Neural Networks paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the RNN + Fusion model in the Recurrent Models for Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the CRF + Aug model in the Commonly Uncommon: Semantic Sparsity in Situation Recognition paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the CRF model in the Situation Recognition: Visual Semantic Role Labeling for Image Understanding paper on the SWiG dataset? | Top-1 Verb, Top-1 Verb & Grounded-Value, Top-1 Verb & Value, Top-5 Verbs, Top-5 Verbs & Grounded-Value, Top-5 Verbs & Value |
What metrics were used to measure the CapDec model in the Text-Only Training for Image Captioning using Noise-Injected CLIP paper on the FlickrStyle10K dataset? | CIDEr |
What metrics were used to measure the Perturb, Predict & Paraphrase model in the Perturb, Predict & Paraphrase: Semi-Supervised Learning using Noisy Student for Image Captioning paper on the COCO dataset? | CIDEr |
What metrics were used to measure the CapDec model in the Text-Only Training for Image Captioning using Noise-Injected CLIP paper on the Flickr30k dataset? | CIDEr |
What metrics were used to measure the SGT model in the Extracting Temporal Event Relation with Syntax-guided Graph Transformer paper on the TB-Dense dataset? | Event Detection F-score |
What metrics were used to measure the Early Fusion (Bert + InceptionV3) model in the Image and Text fusion for UPMC Food-101 using BERT and CNNs paper on the Food-101 dataset? | Accuracy (%) |
What metrics were used to measure the Late Fusion (Bert + InceptionV3) model in the Image and Text fusion for UPMC Food-101 using BERT and CNNs paper on the Food-101 dataset? | Accuracy (%) |
What metrics were used to measure the Convolutional image feature extraction and dense concatenating model in the Multimodal price prediction paper on the CD18 dataset? | Accuracy, F-measure (%) |
What metrics were used to measure the Two Branch Network (Text - Bert + Image - Nts-Net) model in the Are These Birds Similar: Learning Branched Networks for Fine-grained Representations paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the UrbanFM model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P3 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM-ne model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the DeepSD model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the VDSR model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the SRResNet model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the ESPCN model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the SRCNN model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the HA model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P1 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P2 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM-ne model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P2 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P4 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UrbanFM-ne model in the UrbanFM: Inferring Fine-Grained Urban Flows paper on the TaxiBJ-P4 dataset? | MSE, MAE, MAPE |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf & w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-fr-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-en-fr dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf & w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-zh-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
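
The two columns above (`prompts`, `metrics_response`) can be consumed programmatically. Below is a minimal sketch, assuming the table is published as a Hugging Face dataset with exactly these columns; the repository name `username/paper-metrics-qa` and the `train` split are hypothetical placeholders, not the actual identifiers of this dataset.

```python
# Minimal sketch: load a prompts/metrics_response table and filter it by dataset name.
# NOTE: "username/paper-metrics-qa" and the "train" split are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("username/paper-metrics-qa", split="train")  # columns: prompts, metrics_response

# Keep only rows whose prompt asks about the imSitu dataset.
imsitu_rows = ds.filter(lambda row: "imSitu dataset" in row["prompts"])

# Print a few prompt/response pairs for inspection.
for row in imsitu_rows.select(range(min(3, len(imsitu_rows)))):
    print(row["prompts"])
    print("->", row["metrics_response"])
```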