prompts: string (length 81–413)
metrics_response: string (length 0–371)
What metrics were used to measure the LSTM Seq2Seq with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the Transformer with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the Toolformer model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the GPT-3 (175B) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the Toolformer (disabled) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the GPT-J model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the GPT-J + CC model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the OPT (66B) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the SVAMP dataset?
Execution Accuracy, Accuracy
What metrics were used to measure the GPT-4-code (CSV, w/ code, SC, k=16) model in the Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-4-code (CSV, w/ code) model in the Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the CR (GPT-4-turbo, w/ code) model in the Cumulative Reasoning with Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-4-code (w/ code) model in the Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-4-code (w/o code) model in the Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA-Code 34B (w/ code, SC, k=50) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the CR (GPT-4, w/o code) model in the Cumulative Reasoning with Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA 70B (w/ code, SC, k=50) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the SKiC (GPT-4) model in the Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PHP (GPT-4) model in the Progressive-Hint Prompting Improves Reasoning in Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-4 (w/ code, PAL) model in the PAL: Program-aided Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA-Code 34B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 540B (maj1@k, k=64) model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA 70B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 2 (few-shot, k=4, SC) model in the PaLM 2 Technical Report paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA-Code 13B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA-Code 7B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 62B (maj1@k, k=64) model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA 13B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-4 model in the Sparks of Artificial General Intelligence: Early experiments with GPT-4 paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the ToRA 7B (w/ code) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 2 (few-shot, k=4, CoT) model in the PaLM 2 Technical Report paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 540B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 540B (5-shot) mCoT model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 62B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the MetaMath 70B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 8B (maj1@k, k=64) model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the MetaMath 13B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 65B (maj1@k) model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GAL 120B (5-shot) mCoT model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the MetaMath 7B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the OpenAI davinci-002 model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GAL 120B <work> model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 33B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the Minerva 8B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GAL 30B (5-shot) mCoT model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GAL 30B <work> model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 65B model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 540B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 540B (5-shot) mCoT model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 13B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 33B model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 7B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-2 (1.5B) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-2 (0.7B) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-2 (0.3B) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-3 13B model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-2 (0.1B) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-3-175B (few-shot) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-3 175B (8-shot) model in the Galactica: A Large Language Model for Science paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 62B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 13B model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-3-13B (few-shot) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the LLaMA 7B model in the LLaMA: Open and Efficient Foundation Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the GPT-3 (2.7B) model in the Measuring Mathematical Problem Solving With the MATH Dataset paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the PaLM 8B model in the Solving Quantitative Reasoning Problems with Language Models paper on the MATH dataset?
Accuracy, Parameters (Billions)
What metrics were used to measure the EPT-X model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the PEN dataset?
Accuracy (%)
What metrics were used to measure the KGPOOL model in the KGPool: Dynamic Knowledge Graph Context Selection for Relation Extraction paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the RECON model in the RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the CGRE model in the Distantly-Supervised Long-Tailed Relation Extraction Using Constraint Graphs paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the BGWA model in the Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the PCNN+ATT model in the Neural Relation Extraction with Selective Attention over Instances paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the PCNN model in the Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the REDSandT model in the Improving Distantly-Supervised Relation Extraction through BERT-based Label & Instance Embeddings paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the BiGRU+WLA+EWA model in the Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the BGRU-SET model in the Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning paper on the New York Times Corpus dataset?
P@10%, P@30%, AUC, Average Precision
What metrics were used to measure the DocDS model in the From Bag of Sentences to Document: Distantly Supervised Relation Extraction via Machine Reading Comprehension paper on the NYT dataset?
P@100, P@200, P@300, PR AUC
What metrics were used to measure the Distilled Network model in the Distilled Neural Networks for Efficient Learning to Rank paper on the MSLR-WEB30K dataset?
nDCG@10
What metrics were used to measure the ConAE-128 model in the Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder paper on the MS MARCO dataset?
Time (ms), MRR@10
What metrics were used to measure the ConAE-256 model in the Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder paper on the MS MARCO dataset?
Time (ms), MRR@10
What metrics were used to measure the RetroMAE v2 model in the RetroMAE v2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models paper on the MS MARCO dataset?
Time (ms), MRR@10
What metrics were used to measure the BERT+CONCEPT FILTER model in the Semantic Enrichment of Pretrained Embedding Output for Unsupervised IR paper on the Ohsumed dataset?
NDCG
What metrics were used to measure the Two-tower Bi-Encoder (RoBERTa) model in the A Statutory Article Retrieval Dataset in French paper on the BSARD dataset?
Recall@100, Recall@200, Recall@500
What metrics were used to measure the Siamese Bi-Encoder (RoBERTa) model in the A Statutory Article Retrieval Dataset in French paper on the BSARD dataset?
Recall@100, Recall@200, Recall@500
What metrics were used to measure the BM25 model in the A Statutory Article Retrieval Dataset in French paper on the BSARD dataset?
Recall@100, Recall@200, Recall@500
What metrics were used to measure the MIND model in the Multi-Interest Network with Dynamic Routing for Recommendation at Tmall paper on the Amazon dataset?
HR@30
What metrics were used to measure the SGPT-5.8B-msmarco model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset?
nDCG@10
What metrics were used to measure the hpipubcommon model in the HPI-DHC at TREC 2018 Precision Medicine Track paper on the TREC-PM dataset?
infNDCG
What metrics were used to measure the hpictall model in the HPI-DHC at TREC 2018 Precision Medicine Track paper on the TREC-PM dataset?
infNDCG
What metrics were used to measure the RetroMAE model in the RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder paper on the MSMARCO dataset?
MRR@10
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the CQADupStack dataset?
mAP@100
What metrics were used to measure the TSDAE model in the TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning paper on the CQADupStack dataset?
mAP@100
What metrics were used to measure the ACRV Baseline model in the paper on the Semantic Scene Understanding Challenge (active actuation & ground-truth localisation) dataset?
OMQ, avg_pairwise, avg_label, avg_spatial, avg_fp_quality
What metrics were used to measure the ACRV Baseline model in the paper on the Semantic Scene Understanding Challenge (passive actuation & ground-truth localisation) dataset?
OMQ, avg_pairwise, avg_label, avg_spatial, avg_fp_quality
What metrics were used to measure the Team VGAI (TCS Research) model in the paper on the Semantic Scene Understanding Challenge (passive actuation & ground-truth localisation) dataset?
OMQ, avg_pairwise, avg_label, avg_spatial, avg_fp_quality
What metrics were used to measure the Demo_semantic_SLAM model in the paper on the Semantic Scene Understanding Challenge (passive actuation & ground-truth localisation) dataset?
OMQ, avg_pairwise, avg_label, avg_spatial, avg_fp_quality
What metrics were used to measure the CPN(ResNet-101) model in the Context Prior for Scene Segmentation paper on the ADE20K val dataset?
Mean IoU
What metrics were used to measure the Heterogeneous Dynamic Convolutions model in the Geometry-Aware Supertagging with Heterogeneous Dynamic Convolutions paper on the CCGbank dataset?
Accuracy
What metrics were used to measure the NeST-CCG + BERT model in the Supertagging Combinatory Categorial Grammar with Attentive Graph Convolutional Networks paper on the CCGbank dataset?
Accuracy
What metrics were used to measure the CVT + Multi-task + Large model in the Semi-Supervised Sequence Modeling with Cross-View Training paper on the CCGbank dataset?
Accuracy
What metrics were used to measure the Lewis et al. model in the LSTM CCG Parsing paper on the CCGbank dataset?
Accuracy
What metrics were used to measure the BiLSTM-LAN model in the Hierarchically-Refined Label Attention Network for Sequence Labeling paper on the CCGbank dataset?
Accuracy