Columns: prompts (string, 81-413 characters), metrics_response (string, 0-371 characters).
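A minimal sketch of iterating this two-column data, assuming it has been exported to a CSV file; the filename "metrics_qa.csv" is hypothetical:

```python
import csv

# Iterate the two columns described above; "metrics_qa.csv" is a hypothetical
# export of this data with the columns "prompts" and "metrics_response".
with open("metrics_qa.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        question = row["prompts"]          # 81-413 characters
        answer = row["metrics_response"]   # 0-371 characters; may be empty
        print(question, "->", answer)
```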
What metrics were used to measure the LSTM model in the Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes paper on the bAbI dataset?
Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate
What metrics were used to measure the RR model in the Recurrent Relational Networks paper on the bAbI dataset?
Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate
What metrics were used to measure the ReMO model in the Finding ReMO (Related Memory Object): A Simple Neural Architecture for Text based Reasoning paper on the bAbI dataset?
Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate
What metrics were used to measure the NUTM model in the Neural Stored-program Memory paper on the bAbI dataset?
Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate
What metrics were used to measure the SDNC model in the Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes paper on the bAbI dataset?
Accuracy (trained on 10k), Accuracy (trained on 1k), Mean Error Rate
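For reference, the "Mean Error Rate" reported on bAbI is conventionally the unweighted average of the per-task error rates across the 20 bAbI tasks. A minimal sketch, with invented per-task values:

```python
# Invented per-task error rates (%) for the 20 bAbI tasks -- illustrative only.
task_error_rates = [0.0, 0.3, 1.1, 0.0, 0.5, 0.1, 1.2, 0.4, 0.0, 0.2,
                    0.0, 0.1, 0.0, 0.3, 0.0, 45.2, 4.1, 0.6, 2.0, 0.0]

# "Mean Error Rate" is the unweighted average over the 20 tasks.
mean_error_rate = sum(task_error_rates) / len(task_error_rates)
print(f"Mean error rate: {mean_error_rate:.2f}%")  # ~2.8% for these values
```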
What metrics were used to measure the GPT-4 (RLHF) model in the GPT-4 Technical Report paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA-2-Chat-13B + Representation Control (Contrast Vector) model in the Representation Engineering: A Top-Down Approach to AI Transparency paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA-2-Chat-7B + Representation Control (Contrast Vector) model in the Representation Engineering: A Top-Down Approach to AI Transparency paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Vicuna 7B + Inference Time Intervention (ITI) model in the Inference-Time Intervention: Eliciting Truthful Answers from a Language Model paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Alpaca 7B + Inference Time Intervention (ITI) model in the Inference-Time Intervention: Eliciting Truthful Answers from a Language Model paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 280B (zero-shot, Our Prompt + Choices) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA 7B + Inference Time Intervention (ITI) model in the Inference-Time Intervention: Eliciting Truthful Answers from a Language Model paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 7.1B (zero-shot, QA prompts) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 7.1B (zero-shot, Our Prompt + Choices) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 1.4B (zero-shot, QA prompts) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GPT-2 1.5B model in the TruthfulQA: Measuring How Models Mimic Human Falsehoods paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 1.4B (zero-shot, Our Prompt + Choices) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GPT-3 175B model in the TruthfulQA: Measuring How Models Mimic Human Falsehoods paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the OPT 175B model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GPT-J 6B model in the TruthfulQA: Measuring How Models Mimic Human Falsehoods paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the UnifiedQA 3B model in the TruthfulQA: Measuring How Models Mimic Human Falsehoods paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the Gopher 280B (zero-shot, QA prompts) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA 65B model in the LLaMA: Open and Efficient Foundation Language Models paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA 33B model in the LLaMA: Open and Efficient Foundation Language Models paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA 13B model in the LLaMA: Open and Efficient Foundation Language Models paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
What metrics were used to measure the LLaMA 7B model in the LLaMA: Open and Efficient Foundation Language Models paper on the TruthfulQA dataset?
MC1, MC2, % true, % info, % true (GPT-judge), BLEURT, ROUGE, BLEU
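As context for the TruthfulQA rows above: MC1 scores whether the model's single highest-scored answer choice is true, and MC2 is the normalized probability mass assigned to the set of true reference answers. A minimal sketch of these standard definitions; the toy scores and truth labels are invented:

```python
import numpy as np

def mc1(scores, is_true):
    """MC1: 1 if the single highest-scored choice is a true answer, else 0."""
    return float(is_true[int(np.argmax(scores))])

def mc2(scores, is_true):
    """MC2: normalized probability mass assigned to the true answer choices."""
    probs = np.exp(scores - np.max(scores))  # softmax over choices, stabilized
    probs /= probs.sum()
    return float(probs[np.array(is_true, dtype=bool)].sum())

# Toy example: three answer choices with per-choice log-probs; the first is true.
scores = np.array([-1.0, -2.0, -3.0])
labels = [1, 0, 0]
print(mc1(scores, labels), round(mc2(scores, labels), 3))  # 1.0 0.665
```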
What metrics were used to measure the syntax, frame, coreference, and word embedding features model in the A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data paper on the MCTest-160 dataset?
Accuracy
What metrics were used to measure the Atlas (full, Wiki-dec-2018 index) model in the Atlas: Few-shot Learning with Retrieval Augmented Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the Atlas (full, Wiki-dec-2021+CC index) model in the Atlas: Few-shot Learning with Retrieval Augmented Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the FiE model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the R2-D2 (full) model in the R2-D2: A Modular Baseline for Open-Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the ReAtt model in the Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer paper on the Natural Questions dataset?
EM
What metrics were used to measure the FiD-KD (full) model in the Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the EMDR^2 model in the End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the FiD (full) model in the Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the RETRO + DPR (full) model in the Improving language models by retrieving from trillions of tokens paper on the Natural Questions dataset?
EM
What metrics were used to measure the Codex + REPLUG LSR (Few-Shot) model in the REPLUG: Retrieval-Augmented Black-Box Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the Atlas (few-shot, k=64, Wiki-dec-2018 index) model in the Atlas: Few-shot Learning with Retrieval Augmented Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the Codex + REPLUG (Few-Shot) model in the REPLUG: Retrieval-Augmented Black-Box Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the RAG model in the Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks paper on the Natural Questions dataset?
EM
What metrics were used to measure the Atlas (few-shot, k=64, Wiki-dec-2021+CC index) model in the Atlas: Few-shot Learning with Retrieval Augmented Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the DPR model in the Dense Passage Retrieval for Open-Domain Question Answering paper on the Natural Questions dataset?
EM
What metrics were used to measure the REALM model in the REALM: Retrieval-Augmented Language Model Pre-Training paper on the Natural Questions dataset?
EM
What metrics were used to measure the LLaMA 65B (few-shot, k=64) model in the LLaMA: Open and Efficient Foundation Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM-540B (Few-Shot, k=64) model in the PaLM: Scaling Language Modeling with Pathways paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the Natural Questions dataset?
EM
What metrics were used to measure the Chinchilla (few-shot, k=64) model in the Training Compute-Optimal Large Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the LLaMA 65B (few-shot, k=5) model in the LLaMA: Open and Efficient Foundation Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the LLaMA 2 70B (one-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the GLaM 62B/64E (Few-Shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the Natural Questions dataset?
EM
What metrics were used to measure the LLaMA 65B (one-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the GPT-3 175B (Few-Shot, k=64) model in the Language Models are Few-Shot Learners paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM-540B (One-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the Natural Questions dataset?
EM
What metrics were used to measure the Gopher (few-shot, k=64) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Natural Questions dataset?
EM
What metrics were used to measure the GLaM 62B/64E (One-Shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the Natural Questions dataset?
EM
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the Natural Questions dataset?
EM
What metrics were used to measure the GLaM 62B/64E (Zero-Shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the Natural Questions dataset?
EM
What metrics were used to measure the PaLM-540B (Zero-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the Natural Questions dataset?
EM
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Natural Questions dataset?
EM
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Natural Questions dataset?
EM
What metrics were used to measure the Neo-6B (Few-Shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the Natural Questions dataset?
EM
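The Natural Questions rows above all report EM (exact match), conventionally computed after SQuAD-style answer normalization (lowercasing, stripping punctuation and the articles a/an/the). A minimal sketch of that convention, not of any one paper's exact scoring script:

```python
import re
import string

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation, articles, extra spaces."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, references):
    """EM: 1 if the normalized prediction equals any normalized reference."""
    return float(any(normalize(prediction) == normalize(ref) for ref in references))

print(exact_match("The Eiffel Tower!", ["Eiffel Tower"]))  # 1.0
```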
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Chemistry dataset?
Accuracy
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Chemistry dataset?
Accuracy
What metrics were used to measure the LUKE model in the LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention paper on the TACRED dataset?
Relation F1
What metrics were used to measure the MuCoT model in the MuCoT: Multilingual Contrastive Training for Question-Answering in Low-resource Languages paper on the ChAII - Hindi and Tamil Question Answering dataset?
Jaccard
What metrics were used to measure the KGT5 model in the There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering paper on the AviationQA dataset?
Hits@1
What metrics were used to measure the ChatGPT model in the Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the BERT-Japanese model in the JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension paper on the JaQuAD dataset?
Exact Match, F1
What metrics were used to measure the FiE+PAQ model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the FiE model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the FiDO model in the FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the RAG model in the Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM-540B (Few-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the T5.1.1-XXL+SSM model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the DPR model in the Dense Passage Retrieval for Open-Domain Question Answering paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the GPT-3-175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the REALM model in the REALM: Retrieval-Augmented Language Model Pre-Training paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the ORQA model in the Latent Retrieval for Weakly Supervised Open Domain Question Answering paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the GPT-3-175B (One-Shot) model in the Language Models are Few-Shot Learners paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM-540B (One-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the GLaM 62B/64E (Zero-Shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the GPT-3-175B (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the PaLM-540B (Zero-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the Memory Networks (ensemble) model in the Large-scale Simple Question Answering with Memory Networks paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the Subgraph embeddings model in the Question Answering with Subgraph Embeddings paper on the WebQuestions dataset?
EM, F1
What metrics were used to measure the Weakly Supervised Embeddings model in the Open Question Answering with Weakly Supervised Embedding Models paper on the WebQuestions dataset?
EM, F1
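The WebQuestions rows pair EM with token-level F1: the harmonic mean of precision and recall over the overlapping tokens of prediction and reference. A minimal sketch (normalization beyond lowercasing is omitted for brevity):

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-level F1 between one prediction and one reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # per-token min counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("Barack Obama", "Obama"), 3))  # 0.667
```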
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Quora Question Pairs dataset?
Accuracy