prompts | metrics_response |
|---|---|
What metrics were used to measure the RAG model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: ELI5 dataset? | Rouge-L, F1 |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the ARC-e dataset? | Accuracy |
What metrics were used to measure the SchizzoBioBERT model in the Question-Answering Model for Schizophrenia Symptoms and Their Impact on Daily Life using Mental Health Forums Data paper on the SchizzoSQUAD dataset? | Average F1, Averaged Precision |
What metrics were used to measure the HyperQA model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the SemEvalCQA dataset? | P@1, MAP |
What metrics were used to measure the ConvKN model in the ConvKN at SemEval-2016 Task 3: Answer and Question Selection for Question Answering on Arabic and English Fora paper on the SemEvalCQA dataset? | P@1, MAP |
What metrics were used to measure the AP-CNN model in the Attentive Pooling Networks paper on the SemEvalCQA dataset? | P@1, MAP |
What metrics were used to measure the ARC-II model in the Convolutional Neural Network Architectures for Matching Natural Language Sentences paper on the SemEvalCQA dataset? | P@1, MAP |
What metrics were used to measure the Kelp model in the KeLP at SemEval-2016 Task 3: Learning Semantic Relations between Questions and Answers paper on the SemEvalCQA dataset? | P@1, MAP |
What metrics were used to measure the UnitedQA model in the UnitedQA: A Hybrid Approach for Open Domain Question Answering paper on the EfficientQA dev dataset? | Accuracy |
What metrics were used to measure the DeepPavlov RuBERT model in the SberQuAD -- Russian Reading Comprehension Dataset: Description and Analysis paper on the SberQuAD dataset? | EM, F1 |
What metrics were used to measure the DeepPavlov multilingual BERT model in the SberQuAD -- Russian Reading Comprehension Dataset: Description and Analysis paper on the SberQuAD dataset? | EM, F1 |
What metrics were used to measure the DeepPavlov R-Net model in the SberQuAD -- Russian Reading Comprehension Dataset: Description and Analysis paper on the SberQuAD dataset? | EM, F1 |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the ARC-c dataset? | Accuracy |
What metrics were used to measure the Longformer Encoder Decoder (base) model in the A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers paper on the QASPER dataset? | Token F1 |
What metrics were used to measure the FinQANet (RoBERTa-large) model in the ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering paper on the ConvFinQA dataset? | Execution Accuracy |
What metrics were used to measure the TOME-2 model in the Mention Memory: incorporating textual knowledge into Transformers through entity mention attention paper on the ComplexWebQuestions dataset? | EM |
What metrics were used to measure the TP-Transformer model in the Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving paper on the Mathematics Dataset dataset? | Accuracy |
What metrics were used to measure the Transformer model in the Analysing Mathematical Reasoning Abilities of Neural Models paper on the Mathematics Dataset dataset? | Accuracy |
What metrics were used to measure the LSTM model in the Analysing Mathematical Reasoning Abilities of Neural Models paper on the Mathematics Dataset dataset? | Accuracy |
What metrics were used to measure the XLNet-large model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Accuracy, Accuracy (easy), Accuracy (hard) |
What metrics were used to measure the RoBERTa-large model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Accuracy, Accuracy (easy), Accuracy (hard) |
What metrics were used to measure the BERT-large model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Accuracy, Accuracy (easy), Accuracy (hard) |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Civic dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Civic dataset? | Accuracy |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE Mathematics dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE Mathematics dataset? | Accuracy |
What metrics were used to measure the NSE model in the Gated-Attention Readers for Text Comprehension paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the GA + feature + fix L(w) model in the Gated-Attention Readers for Text Comprehension paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the AoA reader model in the Attention-over-Attention Neural Networks for Reading Comprehension paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the GA reader model in the Gated-Attention Readers for Text Comprehension paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the AS reader (avg) model in the Text Understanding with the Attention Sum Reader Network paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the AS reader (greedy) model in the Text Understanding with the Attention Sum Reader Network paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the EpiReader model in the Natural Language Comprehension with the EpiReader paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the AIA model in the Iterative Alternating Neural Attention for Machine Reading paper on the Children's Book Test dataset? | Accuracy-CN, Accuracy-NE |
What metrics were used to measure the Fusion Retriever+ETC model in the Open Question Answering over Tables and Text paper on the OTT-QA dataset? | ANS-EM |
What metrics were used to measure the CARP model in the Reasoning over Hybrid Chain for Table-and-Text Open Domain QA paper on the OTT-QA dataset? | ANS-EM |
What metrics were used to measure the DensePhrases model in the Learning Dense Representations of Phrases at Scale paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the Cluster-Former (#C=512) model in the Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the Locality-Sensitive Hashing model in the Reformer: The Efficient Transformer paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the Sparse Attention model in the Generating Long Sequences with Sparse Transformers paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the BERTwwm + SQuAD 2 model in the Frustratingly Easy Natural Question Answering paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the BERTjoint model in the A BERT Baseline for the Natural Questions paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the DecAtt + DocReader model in the Natural Questions: a Benchmark for Question Answering Research paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the DrQA model in the Reading Wikipedia to Answer Open-Domain Questions paper on the Natural Questions (long) dataset? | F1, EM |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the CheGeKa dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the CheGeKa dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the CheGeKa dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the CheGeKa dataset? | Accuracy |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the OCN_large model in the Option Comparison Network for Multiple-choice Reading Comprehension paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the DCMN_large model in the Dual Co-Matching Network for Multi-choice Reading Comprehension paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the Finetuned Transformer LM model in the Improving Language Understanding by Generative Pre-Training paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the BiAttention MRU model in the Multi-range Reasoning for Machine Comprehension paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the RACE dataset? | RACE-m, RACE-h, RACE |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-History dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-History dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, Self Consistency) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, CoT Prompting) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Improvement, Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Self Consistency) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the GrapeQA: PEGA+CANP model in the GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (CoT Prompting) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the QA-GNN model in the QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the GrapeQA: PEGA model in the GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the Careful Selection model in the Careful Selection of Knowledge to solve Open Book Question Answering paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the GrapeQA: CANP model in the GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the OpenBookQA dataset? | Accuracy |
What metrics were used to measure the multimodal+LXMERT+ConstrainedMaxPooling model in the Latent Alignment of Procedural Concepts in Multimodal Recipes paper on the RecipeQA dataset? | Accuracy |
What metrics were used to measure the BigBird-etc model in the Big Bird: Transformers for Longer Sequences paper on the WikiHop dataset? | Test |
What metrics were used to measure the Longformer-large model in the Longformer: The Long-Document Transformer paper on the WikiHop dataset? | Test |
What metrics were used to measure the LUKE-Graph model in the LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension paper on the WikiHop dataset? | Test |
What metrics were used to measure the MultiHop (Chen et al., [2019a]) model in the Multi-hop Question Answering via Reasoning Chains paper on the WikiHop dataset? | Test |
What metrics were used to measure the CFC model in the Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering paper on the WikiHop dataset? | Test |
What metrics were used to measure the MHQA model in the Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks paper on the WikiHop dataset? | Test |
What metrics were used to measure the Coref-GRU model in the Neural Models for Reasoning over Multiple Mentions using Coreference paper on the WikiHop dataset? | Test |
What metrics were used to measure the MHPGM + NOIC model in the Commonsense for Generative Multi-Hop Question Answering Tasks paper on the WikiHop dataset? | Test |
What metrics were used to measure the BiDAF model in the Constructing Datasets for Multi-hop Reading Comprehension Across Documents paper on the WikiHop dataset? | Test |
What metrics were used to measure the LinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the MRQA dataset? | Average F1 |
What metrics were used to measure the BERT (large) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the MRQA dataset? | Average F1 |
What metrics were used to measure the IR Baseline model in the Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification paper on the AI2 Kaggle Dataset dataset? | P@1 |
What metrics were used to measure the Our Approach w/o IR model in the Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification paper on the AI2 Kaggle Dataset dataset? | P@1 |
What metrics were used to measure the IR++ model in the Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification paper on the AI2 Kaggle Dataset dataset? | P@1 |
What metrics were used to measure the OUR APPROACH model in the Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification paper on the AI2 Kaggle Dataset dataset? | P@1 |
What metrics were used to measure the ChatGPT model in the Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family paper on the GraphQuestions dataset? | Accuracy |
What metrics were used to measure the Longformer model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (HotpotQA) dataset? | BLEU-1, BLEU-4, METEOR, Rouge-L |
What metrics were used to measure the T5 model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (HotpotQA) dataset? | BLEU-1, BLEU-4, METEOR, Rouge-L |
What metrics were used to measure the Masque (NarrativeQA + MS MARCO) model in the Multi-style Generative Reading Comprehension paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
What metrics were used to measure the BERT-QA with Hard EM objective model in the A Discrete Hard EM Approach for Weakly Supervised Question Answering paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
What metrics were used to measure the Masque (NarrativeQA only) model in the Multi-style Generative Reading Comprehension paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
What metrics were used to measure the ConZNet model in the Cut to the Chase: A Context Zoom-in Network for Reading Comprehension paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
What metrics were used to measure the DecaProp model in the Densely Connected Attention Propagation for Reading Comprehension paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
What metrics were used to measure the MHPGM + NOIC model in the Commonsense for Generative Multi-Hop Question Answering Tasks paper on the NarrativeQA dataset? | Rouge-L, BLEU-1, BLEU-4, METEOR |
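Each row above pairs a natural-language prompt (model, paper, and dataset) with the metrics reported for that combination. Below is a minimal sketch of loading and filtering such a two-column dataset with the Hugging Face `datasets` library; the repository id used here is a placeholder, not the actual dataset name.

```python
# Hypothetical example: load the prompt/metrics pairs and filter them.
# "user/qa-metrics-prompts" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/qa-metrics-prompts", split="train")

# Each row has a "prompts" string and a "metrics_response" string.
# Keep only the rows whose prompt mentions the OpenBookQA dataset.
openbookqa_rows = ds.filter(lambda row: "OpenBookQA" in row["prompts"])

# Print a few prompt -> metrics pairs.
for row in openbookqa_rows.select(range(min(3, len(openbookqa_rows)))):
    print(row["prompts"], "->", row["metrics_response"])
```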