| prompts | metrics_response |
|---|---|
What metrics were used to measure the DecaProp model in the Densely Connected Attention Propagation for Reading Comprehension paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the MINIMAL(Dyn) model in the Efficient and Robust Question Answering from Minimal Context over Documents paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the AMANDA model in the A Question-Focused Multi-Factor Attention Network for Question Answering paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the FastQAExt model in the Making Neural QA as Simple as Possible but not Simpler paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the LinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the DyREX model in the DyREx: Dynamic Query Representation for Extractive Question Answering paper on the NewsQA dataset? | EM, F1 |
What metrics were used to measure the WebQA model in the Evaluating Semantic Parsing against a Simple Web-based Question Answering Model paper on the COMPLEXQUESTIONS dataset? | F1 |
What metrics were used to measure the MHQA model in the Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks paper on the COMPLEXQUESTIONS dataset? | F1 |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Reading Strategies Model model in the Improving Machine Reading Comprehension with General Reading Strategies paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Finetuned Transformer LM model in the Improving Language Understanding by Generative Pre-Training paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Memory chains and semantic supervision model in the UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Hidden Coherence Model model in the Story Comprehension for Predicting What Happens Next paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the val-LS-skip model in the A Simple and Effective Approach to the Story Cloze Test paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the Neo-6B (few-shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the Story Cloze dataset? | Accuracy |
What metrics were used to measure the BioGPT-Large(1.5B) model in the BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Med-PaLM 2 (5-shot) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Flan-PaLM (540B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BioGPT(345M) model in the BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Codex 5-shot CoT model in the Can large language models reason about medical questions? paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Human Performance (single annotator) model in the PubMedQA: A Dataset for Biomedical Research Question Answering paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the GAL 120B (zero-shot) model in the Galactica: A Large Language Model for Science paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Flan-PaLM (62B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BioMedGPT-10B model in the BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Flan-PaLM (540B, SC) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Med-PaLM 2 (ER) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Med-PaLM 2 (CoT + SC) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BLOOM (zero-shot) model in the Galactica: A Large Language Model for Science paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BioLinkBERT (base) model in the LinkBERT: Pretraining Language Models with Document Links paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the OPT (zero-shot) model in the Galactica: A Large Language Model for Science paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the Flan-PaLM (8B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the BioELECTRA uncased model in the BioELECTRA:Pretrained Biomedical text Encoder using Discriminators paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the PaLM (62B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the PaLM (540B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the PaLM (8B, Few-shot) model in the Large Language Models Encode Clinical Knowledge paper on the PubMedQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, Self Consistency) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Consistency) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, CoT Prompting) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (CoT Prompting) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the DROP dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the COPA dataset? | Accuracy |
What metrics were used to measure the DeBERTa-Ensemble model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the COPA dataset? | Accuracy |
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the COPA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the COPA dataset? | Accuracy |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the COPA dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the COPA dataset? | Accuracy |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the COPA dataset? | Accuracy |
What metrics were used to measure the T0-3B + CoT Fine-Tuning model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the COPA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the COPA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the COPA dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the COPA dataset? | Accuracy |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the COPA dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the COPA dataset? | Accuracy |
What metrics were used to measure the KELM (finetuning BERT-large based single model) model in the KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs paper on the COPA dataset? | Accuracy |
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Neo-6B (few-shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the COPA dataset? | Accuracy |
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the COPA dataset? | Accuracy |
What metrics were used to measure the Weakly Supervised Embeddings model in the Open Question Answering with Weakly Supervised Embedding Models paper on the Reverb dataset? | Accuracy |
What metrics were used to measure the Memory Networks (ensemble) model in the Large-scale Simple Question Answering with Memory Networks paper on the Reverb dataset? | Accuracy |
What metrics were used to measure the BART fine-tuned on FairytaleQA model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | F1, Rouge-L |
What metrics were used to measure the BART fine-tuned on NarrativeQA model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | F1, Rouge-L |
What metrics were used to measure the BART model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | F1, Rouge-L |
What metrics were used to measure the DistilBERT model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | F1, Rouge-L |
What metrics were used to measure the albert-xxlarge + APN(baseline) model in the paper on the SCDE dataset? | BA, PA, DE |
What metrics were used to measure the bert-large-uncased + APN(baseline) model in the paper on the SCDE dataset? | BA, PA, DE |
What metrics were used to measure the bert-large-uncased + APN model in the SCDE: Sentence Cloze Dataset with High Quality Distractors From Examinations paper on the SCDE dataset? | BA, PA, DE |
What metrics were used to measure the Attentive LSTM model in the Neural Variational Inference for Text Processing paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the LSTM (lexical overlap + dist output) model in the Neural Variational Inference for Text Processing paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the Bigram-CNN (lexical overlap + dist output) model in the Deep Learning for Answer Sentence Selection paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the Paragraph vector (lexical overlap + dist output) model in the Distributed Representations of Sentences and Documents paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the LSTM model in the Neural Variational Inference for Text Processing paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the Bigram-CNN model in the Deep Learning for Answer Sentence Selection paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the Paragraph vector model in the Distributed Representations of Sentences and Documents paper on the QASent dataset? | MAP, MRR |
What metrics were used to measure the STM model in the Self-Attentive Associative Memory paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the QRN model in the Query-Reduction Networks for Question Answering paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the EntNet model in the Tracking the World State with Recurrent Entity Networks paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the H-Mem model in the H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the DMN+ model in the Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the End-To-End Memory Networks model in the End-To-End Memory Networks paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the ours model in the Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the RUM model in the Rotational Unit of Memory paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
What metrics were used to measure the GORU model in the Gated Orthogonal Recurrent Units: On Learning to Forget paper on the bAbi dataset? | Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate |
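Since every row above follows the same two-column schema (a templated question and a comma-separated list of metric names), a short loading-and-tallying sketch may help. This is a minimal example assuming the table is published as a Hugging Face dataset with the column names `prompts` and `metrics_response` from the header; the dataset ID and split name are placeholders, not the real identifiers.

```python
# Minimal sketch: load a dataset with the prompts/metrics_response schema
# shown above and count how often each metric appears.
# Assumption: "your-username/qa-metrics-prompts" is a hypothetical dataset ID.
from collections import Counter

from datasets import load_dataset  # pip install datasets

ds = load_dataset("your-username/qa-metrics-prompts", split="train")  # placeholder ID/split

# Each row pairs a templated question with its metrics as one string, e.g. "EM, F1".
example = ds[0]
print(example["prompts"])
print(example["metrics_response"])

# Split the comma-separated responses and tally individual metric names.
metric_counts = Counter(
    metric.strip()
    for row in ds
    for metric in row["metrics_response"].split(",")
    if metric.strip()
)
print(metric_counts.most_common())
```

Splitting on commas matches the response format in the rows above (e.g. "EM, F1" or "Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate"); responses that contain no comma simply yield a single metric name.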