Columns:
prompts: string, lengths 81–413
metrics_response: string, lengths 0–371
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SimCSE-RoBERTa-large model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Trans-Encoder-RoBERTa-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the ASA + BERT-base model in the Adversarial Self-Attention for Language Understanding paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SRoBERTa-NLI-STSb-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SBERT-STSb-base model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Trans-Encoder-RoBERTa-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SBERT-STSb-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Pearl model in the paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SBERT-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SRoBERTa-NLI-base model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SBERT-NLI-base model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Dino (STS/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the BERT-large-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SMART-RoBERTa model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
What metrics were used to measure the SMART-BERT model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the STS Benchmark dataset?
Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation
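The STS Benchmark entries above all list Pearson and Spearman correlation. As a minimal illustration of how the two metrics relate (the function names and toy data here are my own, not from any of the papers cited): Pearson measures linear agreement between predicted and gold similarity scores, while Spearman is simply Pearson computed on ranks, so it rewards any monotonic agreement.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks, with ties assigned their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson on the rank-transformed scores."""
    return pearson(ranks(x), ranks(y))
```

On a monotone but nonlinear relation (e.g. gold scores 1, 4, 9, 16 against predictions 1, 2, 3, 4), Spearman is exactly 1.0 while Pearson falls below 1.0, which is why STS leaderboards typically emphasize Spearman.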
What metrics were used to measure the PhraseTransformer model in the PhraseTransformer: An Incorporation of Local Context Information into Sequence-to-sequence Semantic Parsing paper on the ATIS dataset?
Accuracy
What metrics were used to measure the Tranx model in the TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation paper on the ATIS dataset?
Accuracy
What metrics were used to measure the ASN (Rabinovich et al., 2017) model in the Abstract Syntax Networks for Code Generation and Semantic Parsing paper on the ATIS dataset?
Accuracy
What metrics were used to measure the ZH15 (Zhao and Huang, 2015) model in the Type-Driven Incremental Semantic Parsing with Polymorphism paper on the ATIS dataset?
Accuracy
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the EDS (english, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the EDS (english, MRP 2020) dataset?
F1
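The MRP 2020 entries report F1 over predicted versus gold graph elements. A minimal sketch of set-based F1 (the function name and the idea of comparing edge sets are illustrative assumptions, not the official MRP scorer, which uses a more elaborate anchored-graph matching):

```python
def set_f1(pred, gold):
    """F1 of the overlap between a predicted set and a gold set
    (e.g. graph edges): harmonic mean of precision and recall."""
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    p = tp / len(pred)   # precision: correct fraction of predictions
    r = tp / len(gold)   # recall: recovered fraction of gold items
    return 2 * p * r / (p + r)
```

With pred = {a, b, c} and gold = {b, c, d}, precision and recall are both 2/3, so F1 is 2/3.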
What metrics were used to measure the coarse2fine model in the Coarse-to-Fine Decoding for Neural Semantic Parsing paper on the Geo dataset?
Accuracy
What metrics were used to measure the PhraseTransformer model in the PhraseTransformer: An Incorporation of Local Context Information into Sequence-to-sequence Semantic Parsing paper on the Geo dataset?
Accuracy
What metrics were used to measure the Tranx model in the TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation paper on the Geo dataset?
Accuracy
What metrics were used to measure the NL2SQL-BERT model in the Content Enhanced BERT-based Text-to-SQL Generation paper on the WikiSQL dataset?
Accuracy, Denotation accuracy (test)
What metrics were used to measure the TAPEX-Large (weak supervision) model in the TAPEX: Table Pre-training via Learning a Neural SQL Executor paper on the WikiSQL dataset?
Accuracy, Denotation accuracy (test)
What metrics were used to measure the ReasTAP-Large (weak supervision) model in the ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples paper on the WikiSQL dataset?
Accuracy, Denotation accuracy (test)
What metrics were used to measure the TAPAS-Large (weak supervision) model in the TAPAS: Weakly Supervised Table Parsing via Pre-training paper on the WikiSQL dataset?
Accuracy, Denotation accuracy (test)
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the DRG (german, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the DRG (german, MRP 2020) dataset?
F1
What metrics were used to measure the TAPEX-Large model in the TAPEX: Table Pre-training via Learning a Neural SQL Executor paper on the SQA dataset?
Denotation Accuracy, Accuracy
What metrics were used to measure the TAPAS-Large model in the TAPAS: Weakly Supervised Table Parsing via Pre-training paper on the SQA dataset?
Denotation Accuracy, Accuracy
What metrics were used to measure the RESDSQL-3B + NatSQL model in the RESDSQL: Decoupling Schema Linking and Skeleton Parsing for Text-to-SQL paper on the Spider dataset?
Accuracy
What metrics were used to measure the LEVER + Codex model in the LEVER: Learning to Verify Language-to-Code Generation with Execution paper on the Spider dataset?
Accuracy
What metrics were used to measure the RASAT + PICARD model in the RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL paper on the Spider dataset?
Accuracy
What metrics were used to measure the Graphix-3B + PICARD model in the Graphix-T5: Mixing Pre-Trained Transformers with Graph-Aware Layers for Text-to-SQL Parsing paper on the Spider dataset?
Accuracy
What metrics were used to measure the T5-3B + PICARD model in the PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models paper on the Spider dataset?
Accuracy
What metrics were used to measure the SADGA + GAP model in the SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL paper on the Spider dataset?
Accuracy
What metrics were used to measure the RATSQL + GAP model in the Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training paper on the Spider dataset?
Accuracy
What metrics were used to measure the RATSQL + Grammar-Augmented Pre-Training model in the GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing paper on the Spider dataset?
Accuracy
What metrics were used to measure the RATSQL + BERT model in the RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers paper on the Spider dataset?
Accuracy
What metrics were used to measure the Exact Set Matching model in the Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task paper on the Spider dataset?
Accuracy
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the UCCA (german, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the UCCA (german, MRP 2020) dataset?
F1
What metrics were used to measure the ReaRev model in the ReaRev: Adaptive Reasoning for Question Answering over Knowledge Graphs paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the NSM+h model in the Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the CBR-KBQA model in the Case-based Reasoning for Natural Language Queries over Knowledge Bases paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the STAGG (Yih et al., 2016) model in The Value of Semantic Parse Labeling for Knowledge Base Question Answering paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the T5-11B (Raffel et al., 2020) model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WebQuestionsSP dataset?
Accuracy
What metrics were used to measure the ReasonBERT-R model in the ReasonBERT: Pre-trained to Reason with Distant Supervision paper on the GraphQuestions dataset?
F1 Score
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the PTG (english, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the PTG (english, MRP 2020) dataset?
F1
What metrics were used to measure the PERIN + RobeCzech model in the RobeCzech: Czech RoBERTa, a monolingual contextualized language representation model paper on the PTG (czech, MRP 2020) dataset?
F1
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the PTG (czech, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the PTG (czech, MRP 2020) dataset?
F1
What metrics were used to measure the HSP model in the Complex Question Decomposition for Semantic Parsing paper on the ComplexWebQuestions-V1.0 dataset?
EM
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the AMR (chinese, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the AMR (chinese, MRP 2020) dataset?
F1
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the UCCA (english, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the UCCA (english, MRP 2020) dataset?
F1
What metrics were used to measure the Dynamic Least-to-Most Prompting model in the Compositional Semantic Parsing with Large Language Models paper on the CFQ dataset?
Exact Match
What metrics were used to measure the LeAR model in the Learning Algebraic Recombination for Compositional Generalization paper on the CFQ dataset?
Exact Match
What metrics were used to measure the T5-3B w/ Intermediate Representations model in the Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations paper on the CFQ dataset?
Exact Match
What metrics were used to measure the Hierarchical Poset Decoding model in the Hierarchical Poset Decoding for Compositional Generalization in Language paper on the CFQ dataset?
Exact Match
What metrics were used to measure the Universal Transformer model in the Measuring Compositional Generalization: A Comprehensive Method on Realistic Data paper on the CFQ dataset?
Exact Match
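The CFQ entries all report Exact Match. A minimal sketch of the metric (the whitespace normalization below is an illustrative assumption; the official CFQ evaluation applies its own query canonicalization before comparison):

```python
def exact_match(preds, golds):
    """Fraction of predictions identical to the reference string
    after collapsing runs of whitespace."""
    def norm(s):
        return " ".join(s.split())
    hits = sum(norm(p) == norm(g) for p, g in zip(preds, golds))
    return hits / len(golds)
```

Note the metric is all-or-nothing per example: a prediction differing by a single token scores zero, which makes Exact Match a strict measure of compositional generalization.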
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the AMR (english, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the AMR (english, MRP 2020) dataset?
F1
What metrics were used to measure the ReasonBERT-B model in the ReasonBERT: Pre-trained to Reason with Distant Supervision paper on the HotpotQA dataset?
F1-Score
What metrics were used to measure the Dater model in the Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the LEVER model in the LEVER: Learning to Verify Language-to-Code Generation with Execution paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the Binder model in the Binding Language Models in Symbolic Languages paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the OmniTab-Large model in the OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the ReasTAP-Large model in the ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the TAPEX-Large model in the TAPEX: Table Pre-training via Learning a Neural SQL Executor paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the MAPO + TaBERT-Large (K = 3) model in the TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the T5-3B (UnifiedSKG) model in the UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the TAPAS-Large (pre-trained on SQA) model in the TAPAS: Weakly Supervised Table Parsing via Pre-training paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the Structured Attention model in the Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs paper on the WikiTableQuestions dataset?
Accuracy (Test), Accuracy (Dev)
What metrics were used to measure the PERIN model in the ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN paper on the DRG (english, MRP 2020) dataset?
F1
What metrics were used to measure the HUJI-KU model in the HUJI-KU at MRP 2020: Two Transition-based Neural Parsers paper on the DRG (english, MRP 2020) dataset?
F1
What metrics were used to measure the RedPenNet model in the RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans paper on the WI-LOCNESS dataset?
F0.5
What metrics were used to measure the CNN Seq2Seq model in the A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction paper on the Restricted dataset?
F0.5
What metrics were used to measure the CNN Seq2Seq + Quality Estimation model in the Neural Quality Estimation of Grammatical Error Correction paper on the Restricted dataset?
F0.5
What metrics were used to measure the Transformer model in the Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task paper on the Restricted dataset?
F0.5
What metrics were used to measure the + BIFI with no critic model in the LM-Critic: Language Models for Unsupervised Grammatical Error Correction paper on the Restricted dataset?
F0.5
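The grammatical error correction entries report F0.5 rather than F1. F0.5 weights precision twice as heavily as recall, reflecting that a GEC system proposing a wrong edit is worse than one missing an edit. A minimal sketch (the function name is illustrative; official GEC scoring computes precision and recall from edit spans via the M2 scorer or ERRANT):

```python
def f_half(precision, recall):
    """F0.5: F-beta with beta = 0.5, so beta^2 = 0.25
    and precision dominates the weighted harmonic mean."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return 1.25 * precision * recall / (0.25 * precision + recall)
```

For example, a high-precision system (P = 0.9, R = 0.3) scores noticeably higher under F0.5 than a high-recall one (P = 0.3, R = 0.9), even though F1 would rate them identically.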