| prompts | metrics_response |
|---|---|
What metrics were used to measure the HAT (Encoder) model in the Hierarchical Learning for Generation with Long Source Sequences paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the PaLM 540B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the PaLM 62B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the GPT-3 175B (zero-shot) model in the Language Models are Few-Shot Learners paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the PaLM 8B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the NAL model in the ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning paper on the ReCAM dataset? | Accuracy |
What metrics were used to measure the CoT-T5-11B (1024 Shot) model in The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the MedNLI dataset? | Accuracy |
What metrics were used to measure the CoT-T5-11B (1024 Shot) model in The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the DART model in the Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners paper on the MR dataset? | Acc |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the EuroSAT dataset? | Harmonic mean |
What metrics were used to measure the CoT-T5-11B (1024 Shot) model in The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the CaseHOLD dataset? | Accuracy |
What metrics were used to measure the CovidExpert model in the CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19 paper on the Large COVID-19 CT scan slice dataset dataset? | AUC-ROC, Accuracy, Macro F1, Macro Precision, Macro Recall, Micro Precision, Specificity |
What metrics were used to measure the EASY (transductive) model in the EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients paper on the Mini-Imagenet 5-way (1-shot) dataset? | Accuracy, 5 way 1~2 shot |
What metrics were used to measure the HyperShot model in the HyperShot: Few-Shot Learning by Kernel HyperNetworks paper on the Mini-Imagenet 5-way (1-shot) dataset? | Accuracy, 5 way 1~2 shot |
What metrics were used to measure the HCTransformers model in the Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning paper on the Mini-Imagenet 5-way (1-shot) dataset? | Accuracy, 5 way 1~2 shot |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the FGVC Aircraft dataset? | Harmonic mean |
What metrics were used to measure the DART model in the Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners paper on the CR dataset? | Acc |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the UCF101 dataset? | Harmonic mean |
What metrics were used to measure the BGNN model in the paper on the Mini-ImageNet - 5-Shot Learning dataset? | Accuracy |
What metrics were used to measure the TIM-GD model in the Transductive Information Maximization For Few-Shot Learning paper on the Mini-ImageNet - 5-Shot Learning dataset? | Accuracy |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the StanfordCars dataset? | Harmonic mean |
What metrics were used to measure the DART model in the Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners paper on the MRPC dataset? | F1-score |
What metrics were used to measure the HCTransformers model in the Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning paper on the Mini-ImageNet - 1-Shot Learning dataset? | Acc |
What metrics were used to measure the DPGN model in the DPGN: Distribution Propagation Graph Network for Few-shot Learning paper on the Mini-ImageNet - 1-Shot Learning dataset? | Acc |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the DTD dataset? | Harmonic mean |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the OxfordPets dataset? | Harmonic mean |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the Caltech101 dataset? | Harmonic mean |
What metrics were used to measure the DART model in the Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners paper on the GLUE QQP dataset? | F1-score |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the food101 dataset? | Harmonic mean |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the Flowers-102 dataset? | Harmonic mean |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the SUN397 dataset? | Harmonic mean |
What metrics were used to measure the DART model in the Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners paper on the SST-2 Binary classification dataset? | Acc |
What metrics were used to measure the Variational Prompt Tuning model in the Bayesian Prompt Learning for Image-Language Model Generalization paper on the ImageNet dataset? | Harmonic mean |
What metrics were used to measure the ResAttArg model in the Multi-Task Attentive Residual Networks for Argument Mining paper on the DRI Corpus dataset? | Macro F1 |
What metrics were used to measure the BERT model in the Mining Discourse Markers for Unsupervised Sentence Representation Learning paper on the Discovery Dataset dataset? | 1:1 Accuracy |
What metrics were used to measure the BRCNN model in the Bidirectional Recurrent Convolutional Neural Network for Relation Classification paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the DRNNs model in the Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the depLCNN + NS model in the Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the SDP-LSTM model in the Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the DepNN model in the A Dependency-Based Neural Network for Relation Classification paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the MVRNN model in the Semantic Compositionality through Recursive Matrix-Vector Spaces paper on the SemEval 2010 Task 8 dataset? | F1 |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the FewRel dataset? | F1 (10-way 1-shot), F1 (10-way 5-shot), F1 (5-way 1-shot), F1 (5-way 5-shot), F1 |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the FewRel dataset? | F1 (10-way 1-shot), F1 (10-way 5-shot), F1 (5-way 1-shot), F1 (5-way 5-shot), F1 |
What metrics were used to measure the DeepStruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the FewRel dataset? | F1 (10-way 1-shot), F1 (10-way 5-shot), F1 (5-way 1-shot), F1 (5-way 5-shot), F1 |
What metrics were used to measure the DeepEx (zero-shot top-1) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the FewRel dataset? | F1 (10-way 1-shot), F1 (10-way 5-shot), F1 (5-way 1-shot), F1 (5-way 5-shot), F1 |
What metrics were used to measure the DeepEx (zero-shot top-10) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the FewRel dataset? | F1 (10-way 1-shot), F1 (10-way 5-shot), F1 (5-way 1-shot), F1 (5-way 5-shot), F1 |
What metrics were used to measure the ResAttArg model in the Multi-Task Attentive Residual Networks for Argument Mining paper on the CDCP dataset? | Macro F1 |
What metrics were used to measure the ResAttArg model in the Multi-Task Attentive Residual Networks for Argument Mining paper on the AbstRCT - Neoplasm dataset? | Macro F1 |
What metrics were used to measure the SCS-EERE model in the Selecting Optimal Context Sentences for Event-Event Relation Extraction paper on the MATRES dataset? | F1 |
What metrics were used to measure the DeepStruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the TACRED dataset? | F1 |
What metrics were used to measure the DeepEx (zero-shot top-1) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the TACRED dataset? | F1 |
What metrics were used to measure the TANL (multi-task) model in the Structured Prediction as Translation between Augmented Natural Languages paper on the TACRED dataset? | F1 |
What metrics were used to measure the TANL model in the Structured Prediction as Translation between Augmented Natural Languages paper on the TACRED dataset? | F1 |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the TACRED dataset? | F1 |
What metrics were used to measure the DeepEx (zero-shot top-10) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the TACRED dataset? | F1 |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the TACRED dataset? | F1 |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the HellaSwag dataset? | Accuracy |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the SPOT model in the Synthetic Sample Selection for Generalized Zero-Shot Learning paper on the Oxford 102 Flower dataset? | average top-1 classification accuracy |
What metrics were used to measure the ZSL_TF-VAEGAN model in the Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification paper on the Oxford 102 Flower dataset? | average top-1 classification accuracy |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the ZSL-KG model in the Zero-Shot Learning with Common Sense Knowledge Graphs paper on the SNIPS dataset? | Accuracy |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the zsl_ADA model in the A Generative Framework for Zero-Shot Learning with Adversarial Domain Adaptation paper on the CUB-200 - 0-Shot Learning dataset? | Average Per-Class Accuracy |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSVD-QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the MSVD-QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the SeViLA model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the ActivityNet-QA dataset? | Accuracy |
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the ActivityNet-QA dataset? | Accuracy |
What metrics were used to measure the CZSL model in the LOCL: Learning Object-Attribute Composition using Localization paper on the MIT-States dataset? | A-acc |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the LSMDC dataset? | Accuracy |
What metrics were used to measure the DUET (Ours) model in the DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the Composer model in the Compositional Fine-Grained Low-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the ZSL_TF-VAEGAN model in the Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the SPOT model in the Synthetic Sample Selection for Generalized Zero-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the f-VAEGAN-D2 model in the f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the TCN model in the Transferable Contrastive Network for Generalized Zero-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the LisGAN model in the Leveraging the Invariant Side of Generative Zero-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the Cycle-WGAN model in the Multi-modal Cycle-consistent Generalized Zero-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the f-CLSWGAN model in the Feature Generating Networks for Zero-Shot Learning paper on the CUB-200-2011 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the ZSL-KG model in the Zero-Shot Learning with Common Sense Knowledge Graphs paper on the AwA2 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the ZSL_TF-VAEGAN model in the Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification paper on the AwA2 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the DUET (Ours) model in the DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning paper on the AwA2 dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the ZS3Net model in the Zero-Shot Semantic Segmentation paper on the PASCAL Context dataset? | k=10 mIOU |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the COPA dataset? | Accuracy |
What metrics were used to measure the GShard model in the Efficient Language Modeling with Sparse all-MLP paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Switch Transformer model in the Efficient Language Modeling with Sparse all-MLP paper on the COPA dataset? | Accuracy |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the COPA dataset? | Accuracy |
What metrics were used to measure the HASH Layers model in the Efficient Language Modeling with Sparse all-MLP paper on the COPA dataset? | Accuracy |