Columns: prompts (string, lengths 81–413) · metrics_response (string, lengths 0–371)
What metrics were used to measure the OPT (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Biology dataset?
Accuracy
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Biology dataset?
Accuracy
What metrics were used to measure the monoT5-3B model in the No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval paper on the NQ (BEIR) dataset?
nDCG@10
What metrics were used to measure the BM25+CE model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the NQ (BEIR) dataset?
nDCG@10
What metrics were used to measure the ColBERT model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the NQ (BEIR) dataset?
nDCG@10
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the NQ (BEIR) dataset?
nDCG@10
What metrics were used to measure the SGPT-CE-6.1B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the NQ (BEIR) dataset?
nDCG@10
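The retrieval entries above all report nDCG@10. A minimal sketch of that metric, using the linear-gain variant (some toolkits, including the trec_eval setup behind BEIR, use an exponential gain 2^rel − 1 instead; the function names here are illustrative, not from any of the cited papers):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain over the top-k ranked results:
    # each relevance is discounted by log2(rank + 1).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

For example, `ndcg_at_k([3, 2, 1, 0])` is 1.0 (already ideally ordered), while a ranking that buries the relevant document scores below 1.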
What metrics were used to measure the BioMedGPT-10B model in the BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine paper on the PubChemQA dataset?
BLEU-2, BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L, METEOR
What metrics were used to measure the Llama2-7B-chat model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the PubChemQA dataset?
BLEU-2, BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L, METEOR
What metrics were used to measure the PAAG model in the Product-Aware Answer Generation in E-Commerce Question-Answering paper on the JD Product Question Answer dataset?
BLEU
What metrics were used to measure the BioMedGPT-10B model in the BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine paper on the UniProtQA dataset?
BLEU-2, BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L, METEOR
What metrics were used to measure the Llama2-7B-chat model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the UniProtQA dataset?
BLEU-2, BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L, METEOR
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the PIQA dataset?
Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 2 70B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the PIQA dataset?
Accuracy
What metrics were used to measure the MT-NLG 530B (zero-shot) model in the Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 2 34B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Gopher 280B (zero-shot) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Chinchilla 70B (zero-shot) model in the Training Compute-Optimal Large Language Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the PIQA dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (zero-shot) model in the Language Models are Few-Shot Learners paper on the PIQA dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the PIQA dataset?
Accuracy
What metrics were used to measure the FLAN 137B (zero-shot) model in the Finetuned Language Models Are Zero-Shot Learners paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 2 13B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the PIQA dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 2 7B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the PIQA dataset?
Accuracy
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the PIQA dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Open-LLaMA-3B-v2 model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the PIQA dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Sheared-LLaMA-2.7B model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the PIQA dataset?
Accuracy
What metrics were used to measure the Sheared-LLaMA-1.3B model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the PIQA dataset?
Accuracy
What metrics were used to measure the FLAN-T5-large model in the LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LaMini-GPT2-XL model in the LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions paper on the PIQA dataset?
Accuracy
What metrics were used to measure the LaMini-Flan-T5 large model in the LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions paper on the PIQA dataset?
Accuracy
What metrics were used to measure the GPT2-XL model in the LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions paper on the PIQA dataset?
Accuracy
What metrics were used to measure the T5-large model in the LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions paper on the PIQA dataset?
Accuracy
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the PIQA dataset?
Accuracy
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Literature dataset?
Accuracy
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Literature dataset?
Accuracy
What metrics were used to measure the Fast Weight Memory model in the Learning Associative Inference Using Fast Weight Memory paper on the catbAbI QA-mode dataset?
1:1 Accuracy
What metrics were used to measure the Metalearned Neural Memory (plastic) model in the Learning Associative Inference Using Fast Weight Memory paper on the catbAbI QA-mode dataset?
1:1 Accuracy
What metrics were used to measure the AWD-Transformer XL model in the Learning Associative Inference Using Fast Weight Memory paper on the catbAbI QA-mode dataset?
1:1 Accuracy
What metrics were used to measure the AWD-LSTM model in the Learning Associative Inference Using Fast Weight Memory paper on the catbAbI QA-mode dataset?
1:1 Accuracy
What metrics were used to measure the CamemBERT-Large model in the FQuAD: French Question Answering Dataset paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the XLM-RoBERTa-Large model in the FQuAD: French Question Answering Dataset paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the CamemBERT-Base model in the FQuAD: French Question Answering Dataset paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the Camembert-Base-SquadFR-Fquad-Piaf model in the Project PIAF: Building a Native French Question-Answering Dataset paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the XLM-RoBERTa-Base model in the FQuAD: French Question Answering Dataset paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the Fmikaelian-Camembert-Base-Fquad model in the paper on the FQuAD dataset?
EM, F1
What metrics were used to measure the LePetit model in the On the importance of pre-training data volume for compact language models paper on the FQuAD dataset?
EM, F1
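The FQuAD entries above report EM and F1. A sketch of the SQuAD-style token-level scoring these extractive-QA benchmarks typically follow (the article-stripping step shown here handles English articles, as in the original SQuAD script; FQuAD's own evaluation may adapt normalization for French, so treat this as an assumption):

```python
import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop punctuation, strip English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    # 1.0 only when the normalized strings are identical.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    # Harmonic mean of token-level precision and recall on the overlap.
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction that adds one spurious token to a two-token gold answer keeps EM at 0 but still earns partial F1 credit.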
What metrics were used to measure the Vector Database (ChromaDB) model in the RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models paper on the DuoRC dataset?
Accuracy
What metrics were used to measure the Hybrid-RecallM model in the RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models paper on the DuoRC dataset?
Accuracy
What metrics were used to measure the RecallM model in the RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models paper on the DuoRC dataset?
Accuracy
What metrics were used to measure the Longformer model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (NarrativeQA) dataset?
BLEU-1, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the T5 model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (NarrativeQA) dataset?
BLEU-1, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the ECONET model in the ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning paper on the Torque dataset?
F1, EM, C
What metrics were used to measure the RoBERTa-large model in the TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions paper on the Torque dataset?
F1, EM, C
What metrics were used to measure the ByT5 (small) model in the ByT5: Towards a token-free future with pre-trained byte-to-byte models paper on the TweetQA dataset?
BLEU-1, ROUGE-L
What metrics were used to measure the mT5 model in the ByT5: Towards a token-free future with pre-trained byte-to-byte models paper on the TweetQA dataset?
BLEU-1, ROUGE-L
What metrics were used to measure the ByT5 model in the ByT5: Towards a token-free future with pre-trained byte-to-byte models paper on the TweetQA dataset?
BLEU-1, ROUGE-L
What metrics were used to measure the G-DAUG-Combo + RoBERTa-Large model in the Generative Data Augmentation for Commonsense Reasoning paper on the CODAH dataset?
Accuracy
What metrics were used to measure the BERT Large model in the CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense paper on the CODAH dataset?
Accuracy
What metrics were used to measure the Golden Transformer model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the ruRoberta-large finetune model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the ruBert-large finetune model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the ruT5-base-finetune model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the ruBert-base finetune model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the ruT5-large-finetune model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the SBERT_Large model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuBERT plain model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuGPT3Medium model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the Multilingual Bert model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuGPT3Small model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuBERT conversational model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuGPT3Large model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the DaNetQA dataset?
Accuracy
What metrics were used to measure the sMIM (1024) + model in the SentenceMIM: A Latent Variable Language Model paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the sMIM (1024) model in the SentenceMIM: A Latent Variable Language Model paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the HyperQA model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the AP-BiLSTM model in the Attentive Pooling Networks paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the AP-CNN model in the Attentive Pooling Networks paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the LSTM model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the YahooCQA dataset?
P@1, MRR
What metrics were used to measure the CNN model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the YahooCQA dataset?
P@1, MRR
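The YahooCQA entries above report P@1 and MRR, the standard answer-ranking metrics. A minimal sketch over binary relevance labels in ranked order (function names are illustrative, not from the cited papers):

```python
def precision_at_1(ranked_labels):
    # 1.0 if the top-ranked candidate answer is relevant, else 0.0.
    return float(ranked_labels[0] == 1)

def mean_reciprocal_rank(all_ranked_labels):
    # Average, over questions, of 1/rank of the first relevant answer
    # (contributes 0 when no candidate is relevant).
    total = 0.0
    for labels in all_ranked_labels:
        for rank, label in enumerate(labels, start=1):
            if label == 1:
                total += 1.0 / rank
                break
    return total / len(all_ranked_labels)
```

For example, two questions whose correct answers sit at ranks 2 and 1 yield an MRR of (0.5 + 1.0) / 2 = 0.75.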