Dataset columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
What metrics were used to measure the Xu et al. model in the Improving AMR Parsing with Sequence-to-Sequence Pre-training paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the stack-Transformer + self-learning (IBM) model in the Pushing the Limits of AMR Parsing with Self-Learning paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Cai and Lam model in the AMR Parsing via Graph-Sequence Iterative Inference paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the ND+AD+LV model in the Levi Graph AMR Parser using Heterogeneous Attention paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the stack-Transformer (IBM) model in the Transition-based Parsing with Stack-Transformers paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Zhang et al. model in the Broad-Coverage Semantic Parsing as Transduction paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Sequence-to-Graph Transduction model in the AMR Parsing as Sequence-to-Graph Transduction paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Lyu et al. 2021 Full model in the A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Joint model in the AMR Parsing as Graph Prediction with Latent Alignment paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Rewarding Smatch (IBM) model in the Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Cai and Lam model in the Core Semantic First: A Top-down Approach for AMR Parsing paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the ChSeq + 100K model in the Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the Neural-Pointer model in the Oxford at SemEval-2017 Task 9: Neural AMR Parsing with Pointer-Augmented Attention paper on the LDC2017T10 dataset?
Smatch
What metrics were used to measure the AMRBART large model in the Graph Pre-training for AMR Parsing and Generation paper on the New3 dataset?
Smatch
What metrics were used to measure the Graphene Smatch model in the Ensembling Graph Predictions for AMR Parsing paper on the New3 dataset?
Smatch
What metrics were used to measure the SPRING DFS model in the One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline paper on the New3 dataset?
Smatch
What metrics were used to measure the SPRING DFS + silver model in the One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline paper on the New3 dataset?
Smatch
What metrics were used to measure the AMRBART large model in the Graph Pre-training for AMR Parsing and Generation paper on The Little Prince dataset?
Smatch
What metrics were used to measure the Graphene Smatch model in the Ensembling Graph Predictions for AMR Parsing paper on The Little Prince dataset?
Smatch
What metrics were used to measure the SPRING DFS + silver model in the One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline paper on The Little Prince dataset?
Smatch
What metrics were used to measure the SPRING DFS model in the One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline paper on The Little Prince dataset?
Smatch
What metrics were used to measure the Attach-Juxtapose Parser + BERT model in the Strongly Incremental Constituency Parsing with Graph Neural Networks paper on the CTB5 dataset?
F1 score
What metrics were used to measure the SAPar + BERT model in the Improving Constituency Parsing with Span Attention paper on the CTB5 dataset?
F1 score
What metrics were used to measure the N-ary semi-markov + BERT model in the N-ary Constituent Tree Parsing with Recursive Semi-Markov Model paper on the CTB5 dataset?
F1 score
What metrics were used to measure the CRF Parser + BERT model in the Fast and Accurate Neural CRF Constituency Parsing paper on the CTB5 dataset?
F1 score
What metrics were used to measure the Kitaev et al. 2019 model in the Multilingual Constituency Parsing with Self-Attention and Pre-Training paper on the CTB5 dataset?
F1 score
What metrics were used to measure the CRF Parser model in the Fast and Accurate Neural CRF Constituency Parsing paper on the CTB5 dataset?
F1 score
What metrics were used to measure the Zhou et al. 2019 model in the Head-Driven Phrase Structure Grammar Parsing on Penn Treebank paper on the CTB5 dataset?
F1 score
What metrics were used to measure the Kitaev et al. 2018 model in the Constituency Parsing with a Self-Attentive Encoder paper on the CTB5 dataset?
F1 score
What metrics were used to measure the CRF Parser + Electra model in the Fast and Accurate Neural CRF Constituency Parsing paper on the CTB7 dataset?
F1 score
What metrics were used to measure the CRF Parser + BERT model in the Fast and Accurate Neural CRF Constituency Parsing paper on the CTB7 dataset?
F1 score
What metrics were used to measure the CRF Parser model in the Fast and Accurate Neural CRF Constituency Parsing paper on the CTB7 dataset?
F1 score
What metrics were used to measure the SAPar + XLNet model in the Improving Constituency Parsing with Span Attention paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Label Attention Layer + HPSG + XLNet model in the Rethinking Self-Attention: Towards Interpretability in Neural Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Attach-Juxtapose Parser + XLNet model in the Strongly Incremental Constituency Parsing with Graph Neural Networks paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Head-Driven Phrase Structure Grammar Parsing (Joint) + XLNet model in the Head-Driven Phrase Structure Grammar Parsing on Penn Treebank paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the CRF Parser + RoBERTa model in the Fast and Accurate Neural CRF Constituency Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the N-ary semi-markov + BERT-large model in the N-ary Constituent Tree Parsing with Recursive Semi-Markov Model paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the NFC + BERT-large model in the Investigating Non-local Features for Neural Constituency Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Head-Driven Phrase Structure Grammar Parsing (Joint) + BERT model in the Head-Driven Phrase Structure Grammar Parsing on Penn Treebank paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the CRF Parser + BERT model in the Fast and Accurate Neural CRF Constituency Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the CNN Large + fine-tune model in the Cloze-driven Pretraining of Self-attention Networks paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the SpanRel model in the Generalizing Natural Language Analysis through Span-relation Representations paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Tetra Tagging model in the Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Self-attentive encoder + ELMo model in the Constituency Parsing with a Self-Attentive Encoder paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Model combination model in the Improving Neural Parsing by Disentangling Model Combination and Reranking Effects paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the LSTM Encoder-Decoder + LSTM-LM model in the Direct Output Connection for a High-Rank Language Model paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the LSTM Encoder-Decoder + LSTM-LM model in the An Empirical Study of Building a Strong Baseline for Constituency Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the In-order model in the In-Order Transition-based Constituent Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the CRF Parser model in the Fast and Accurate Neural CRF Constituency Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Semi-supervised LSTM-LM model in the Parsing as Language Modeling paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Stack-only RNNG model in the What Do Recurrent Neural Network Grammars Learn About Syntax? paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Transformer model in the Attention Is All You Need paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Parse fusion model in the Syntactic Parse Fusion paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Semi-supervised LSTM model in the Grammar as a Foreign Language paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the Self-training model in the Effective Self-Training for Parsing paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the RNN Grammar model in the Recurrent Neural Network Grammars paper on the Penn Treebank dataset?
F1 score
What metrics were used to measure the SAPar model in the Improving Constituency Parsing with Span Attention paper on the ATB dataset?
F1 score
What metrics were used to measure the ASTactic model in the Learning to Prove Theorems via Interacting with Proof Assistants paper on the CoqGym dataset?
Percentage correct
What metrics were used to measure the Lean GPT-f model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Metamath GPT-f model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste-7d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the GPT-f model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste-1d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lean tidy model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the LEGO-Prover ChatGPT model in the LEGO-Prover: Neural Theorem Proving with Growing Libraries paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lyra + GPT-4 model in the Lyra: Orchestrating Dual Correction in Automated Theorem Proving paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the DSP (62B Minerva informal) model in the Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs paper on the miniF2F-valid dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the GPT-f model in the Generative Language Modeling for Automated Theorem Proving paper on the Metamath set.mm dataset?
Percentage correct, Pass@32
What metrics were used to measure the MetaGen-IL + Holophrasm model in the Learning to Prove Theorems by Learning to Generate Theorems paper on the Metamath set.mm dataset?
Percentage correct, Pass@32
What metrics were used to measure the Holophrasm model in the Holophrasm: a neural Automated Theorem Prover for higher-order logic paper on the Metamath set.mm dataset?
Percentage correct, Pass@32
What metrics were used to measure the Evariste model in the HyperTree Proof Search for Neural Theorem Proving paper on the Metamath set.mm dataset?
Percentage correct, Pass@32
What metrics were used to measure the Thor + expert iteration on autoformalised theorems model in the Autoformalization with Large Language Models paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Thor model in the Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lean Expert Iteration model in the Formal Mathematics Statement Curriculum Learning paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the ReProver model in the LeanDojo: Theorem Proving with Retrieval-Augmented Language Models paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the LLEMMA-7b model in the Llemma: An Open Language Model For Mathematics paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the LLEMMA-34b model in the Llemma: An Open Language Model For Mathematics paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lean GPT-f model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the PACT (reproduced by Thor) model in the Proof Artifact Co-training for Theorem Proving with Language Models paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the COPRA + GPT-4 model in the A Language-Agent Approach to Formal Theorem-Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Sledgehammer + heuristics model in the Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lean tidy model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the COPRA + GPT-3.5 model in the A Language-Agent Approach to Formal Theorem-Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Sledgehammer model in the Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Metamath GPT-f model in the MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste-7d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste-1d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the GPT-f model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Lyra + GPT-4 model in the Lyra: Orchestrating Dual Correction in Automated Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the LEGO-Prover ChatGPT model in the LEGO-Prover: Neural Theorem Proving with Growing Libraries paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Decomposing the Enigma model in the Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the DSP (540B Minerva informal) model in the Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs paper on the miniF2F-test dataset?
Pass@1, Pass@8, Pass@64, Pass@100
What metrics were used to measure the Evariste-7d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-curriculum dataset?
Pass@64
What metrics were used to measure the Evariste-1d model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-curriculum dataset?
Pass@64
What metrics were used to measure the Evariste model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-curriculum dataset?
Pass@64
What metrics were used to measure the GPT-f model in the HyperTree Proof Search for Neural Theorem Proving paper on the miniF2F-curriculum dataset?
Pass@64
What metrics were used to measure the ReProver model in the LeanDojo: Theorem Proving with Retrieval-Augmented Language Models paper on the LeanDojo Benchmark dataset?
Pass@1 on the random split, Pass@1 on the novel_premises split