| prompts | metrics_response |
|---|---|
What metrics were used to measure the Base Layers model in the Efficient Language Modeling with Sparse all-MLP paper on the COPA dataset? | Accuracy |
What metrics were used to measure the BLOOMZ model in the Crosslingual Generalization through Multitask Finetuning paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the ResNet-50 model in the Learning Transferable Visual Models From Natural Language Supervision paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the ViT-B/16 model in the Learning Transferable Visual Models From Natural Language Supervision paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the PIQA dataset? | Accuracy |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the PIQA dataset? | Accuracy |
What metrics were used to measure the CLIP(ResNet-50) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the CLIP(ViT-B/16) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the iVQA dataset? | Accuracy |
What metrics were used to measure the SeViLA model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the ZSL-KG model in the Zero-Shot Learning with Common Sense Knowledge Graphs paper on the ImageNet dataset? | Top-1 |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the HellaSwag dataset? | Accuracy |
What metrics were used to measure the ZSL-KG model in the Zero-Shot Learning with Common Sense Knowledge Graphs paper on the aPY - 0-Shot dataset? | Top-1 |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the TGIF-QA dataset? | Accuracy |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the ReCoRD dataset? | Accuracy |
What metrics were used to measure the sMLP – deterministic model in the Efficient Language Modeling with Sparse all-MLP paper on the ReCoRD dataset? | Accuracy |
What metrics were used to measure the SPOT (VAEGAN) model in the Synthetic Sample Selection for Generalized Zero-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the ZSL_TF-VAEGAN model in the Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the f-VAEGAN model in the f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the DUET (Ours) model in the DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the LisGAN model in the Leveraging the Invariant Side of Generative Zero-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the TCN model in the Transferable Contrastive Network for Generalized Zero-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the f-CLSWGAN model in the Feature Generating Networks for Zero-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the Cycle-WGAN model in the Multi-modal Cycle-consistent Generalized Zero-Shot Learning paper on the SUN Attribute dataset? | average top-1 classification accuracy, Accuracy Seen, Accuracy Unseen, H |
What metrics were used to measure the Chatterjee, Dutta et al. [1] Transfer Learning on ResNet-50 model in the AKHCRNet: Bengali Handwritten Character Recognition Using Deep Learning paper on the BanglaLekha Isolated Dataset dataset? | Accuracy |
What metrics were used to measure the APCLIP model in the Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the DFA-ENT model in the Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the DFA-SAFN model in the Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the EasyTL model in the Easy Transfer Learning By Exploiting Intra-domain Structures paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the MEDA model in the Visual Domain Adaptation with Manifold Embedded Distribution Alignment paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the Random model in the Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm paper on the Amazon Review Polarity dataset? | Accuracy, Structure Aware Intrinsic Dimension |
What metrics were used to measure the BERT-Large model in the Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning paper on the Amazon Review Polarity dataset? | Accuracy, Structure Aware Intrinsic Dimension |
What metrics were used to measure the Physical Access model in the Audio Spoofing Verification using Deep Convolutional Neural Networks by Transfer Learning paper on the KITTI Object Tracking Evaluation 2012 dataset? | EER |
What metrics were used to measure the riadd.aucmedi model in the Multi-Disease Detection in Retinal Imaging based on Ensembling Heterogeneous Deep Learning Models paper on the Retinal Fundus MultiDisease Image Dataset (RFMiD) dataset? | AUROC |
What metrics were used to measure the Co-Tuning model in the Co-Tuning for Transfer Learning paper on the COCO70 dataset? | Accuracy |
What metrics were used to measure the CNN model in the Plant Disease Detection from Images paper on the 100 sleep nights of 8 caregivers dataset? | 10-20% Mask PSNR |
What metrics were used to measure the QDGAT (ensemble) model in the Question Directed Graph Attention Network for Numerical Reasoning over Text paper on the DROP Test dataset? | F1 |
What metrics were used to measure the POET model in the Reasoning Like Program Executors paper on the DROP Test dataset? | F1 |
What metrics were used to measure the PaLM 2 (few-shot) model in the PaLM 2 Technical Report paper on the DROP Test dataset? | F1 |
What metrics were used to measure the BERT+Calculator (ensemble) model in the Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension paper on the DROP Test dataset? | F1 |
What metrics were used to measure the NeRd model in the Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension paper on the DROP Test dataset? | F1 |
What metrics were used to measure the GPT-4 (few-shot, k=3) model in the GPT-4 Technical Report paper on the DROP Test dataset? | F1 |
What metrics were used to measure the TASE-BERT model in the A Simple and Effective Model for Answering Multi-span Questions paper on the DROP Test dataset? | F1 |
What metrics were used to measure the MTMSN Large model in the A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning paper on the DROP Test dataset? | F1 |
What metrics were used to measure the GenBERT (+ND+TD) model in the Injecting Numerical Reasoning Skills into Language Models paper on the DROP Test dataset? | F1 |
What metrics were used to measure the NumNet model in the NumNet: Machine Reading Comprehension with Numerical Reasoning paper on the DROP Test dataset? | F1 |
What metrics were used to measure the GPT-3.5 (few-shot, k=3) model in the GPT-4 Technical Report paper on the DROP Test dataset? | F1 |
What metrics were used to measure the NAQA Net model in the DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs paper on the DROP Test dataset? | F1 |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the DROP Test dataset? | F1 |
What metrics were used to measure the BERT model in the DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs paper on the DROP Test dataset? | F1 |
What metrics were used to measure the UnifiedQA model in the UnifiedQA: Crossing Format Boundaries With a Single QA System paper on the CommonsenseQA dataset? | Test Accuracy |
What metrics were used to measure the DeBERTaV3large model in the DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing paper on the SWAG dataset? | Accuracy |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-English dataset? | Accuracy |
What metrics were used to measure the Bard model in the Performance Comparison of Large Language Models on VNHSGE English Dataset: OpenAI ChatGPT, Microsoft Bing Chat, and Google Bard paper on the VNHSGE-English dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-English dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the FLAN 137B (zero-shot) model in the Finetuned Language Models Are Zero-Shot Learners paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the KELM (finetuning BERT-large based single model) model in the KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the BERT-large(single model) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the Neo-6B (few-shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the MultiRC dataset? | F1, EM |
What metrics were used to measure the Gated-Attention Reader model in the CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension paper on the CliCR dataset? | F1 |
What metrics were used to measure the Stanford Attentive Reader model in the CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension paper on the CliCR dataset? | F1 |
What metrics were used to measure the ChatGPT model in the Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family paper on the KQA Pro dataset? | Accuracy |
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BLURB dataset? | Accuracy |
What metrics were used to measure the BioLinkBERT (base) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BLURB dataset? | Accuracy |
What metrics were used to measure the PubMedBERT (uncased; abstracts) model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the BLURB dataset? | Accuracy |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (zero-shot) model in the Language Models are Few-Shot Learners paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the PaLM 62B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the OBQA dataset? | Accuracy |
What metrics were used to measure the TagOp model in the TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance paper on the TAT-QA dataset? | Exact Match (EM) |
What metrics were used to measure the Memory Networks (ensemble) model in the Large-scale Simple Question Answering with Memory Networks paper on the SimpleQuestions dataset? | F1 |
What metrics were used to measure the RBG model in the Read before Generate! Faithful Long Form Question Answering with Machine Reading paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the KID model in the Knowledge Infused Decoding paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the c-REALM model in the Hurdles to Progress in Long-form Question Answering paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the EMAT model in the An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the BART+DPR model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
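
A minimal sketch of reading this table programmatically, assuming it is hosted as a standard Hugging Face dataset with the `prompts` and `metrics_response` columns shown above; the repository id in the snippet is a hypothetical placeholder, not the actual Hub id.

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the real Hub id of this dataset.
ds = load_dataset("example-user/paper-metrics-qa", split="train")

# Each row pairs a question about a paper's evaluation setup ("prompts")
# with the metric name(s) reported for that model/dataset pair
# ("metrics_response"), e.g. "Accuracy" or "F1, EM".
for row in ds.select(range(3)):
    print(row["prompts"], "->", row["metrics_response"])
```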