prompts: string (lengths 81–413)
metrics_response: string (lengths 0–371)
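The rows below all follow one template: "What metrics were used to measure the <model> model in the <paper> paper on the <dataset> dataset?", paired with a metric string. A minimal sketch of parsing that template in plain Python (the two sample rows are copied from this dump; the storage format and any file path are left unspecified, and `dataset_name` is an illustrative helper, not part of the dataset):

```python
# Two sample (prompt, metrics_response) rows copied from this dump.
rows = [
    ("What metrics were used to measure the CorefBERT model in the SPOT: Knowledge-Enhanced Language Representations for Information Extraction paper on the SemEval-2010 Task 8 dataset?", "F1"),
    ("What metrics were used to measure the Spark NLP model in the Deeper Clinical Document Understanding Using Relation Extraction paper on the PGR dataset?", "Macro F1"),
]

def dataset_name(prompt: str) -> str:
    # Prompts end with "... on the <dataset> dataset?"; take the last
    # " on the " occurrence and strip the trailing " dataset?".
    return prompt.rsplit(" on the ", 1)[1].removesuffix(" dataset?")

# Group metric answers by the dataset named at the end of each prompt.
by_dataset = {dataset_name(p): m for p, m in rows}
print(by_dataset)  # {'SemEval-2010 Task 8': 'F1', 'PGR': 'Macro F1'}
```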
What metrics were used to measure the CorefBERT model in the SPOT: Knowledge-Enhanced Language Representations for Information Extraction paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the KnowBert-W+W model in the Knowledge Enhanced Contextual Word Representations paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the KnowBERT model in the SPOT: Knowledge-Enhanced Language Representations for Information Extraction paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Entity-Aware BERT model in the Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the SpanBERT model in the SPOT: Knowledge-Enhanced Language Representations for Information Extraction paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the RoBERTa model in the SPOT: Knowledge-Enhanced Language Representations for Information Extraction paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Att-Pooling-CNN model in the Relation Classification via Multi-Level Attention CNNs paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the SpanRel model in the Generalizing Natural Language Analysis through Span-relation Representations paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the TRE model in the Improving Relation Extraction by Pre-trained Language Representations paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Entity Attention Bi-LSTM model in the Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Attention CNN model in the Attention-Based Convolutional Neural Network for Semantic Relation Extraction paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the CR-CNN model in the Classifying Relations by Ranking with Convolutional Neural Networks paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Attention Bi-LSTM model in the Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the CNN model in the Relation Classification via Convolutional Deep Neural Network paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the Bi-LSTM model in the Bidirectional Long Short-Term Memory Networks for Relation Classification paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the LLM-QA4R (Zero-shot) model in the Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors paper on the SemEval-2010 Task 8 dataset?
F1
What metrics were used to measure the KLG model in the Reviewing Labels: Label Graph Network with Top-k Prediction Set for Relation Extraction paper on the TACRED-Revisited dataset?
F1
What metrics were used to measure the LLM-QA4R (Zero-shot) model in the Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors paper on the TACRED-Revisited dataset?
F1
What metrics were used to measure the BiTT model in A Bidirectional Tree Tagging Scheme for Joint Medical Relation Extraction paper on the DuIE dataset?
F1
What metrics were used to measure the WDec model in the Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction paper on the NYT24 dataset?
F1
What metrics were used to measure the HRLRE model in A Hierarchical Framework for Relation Extraction with Reinforcement Learning paper on the NYT24 dataset?
F1
What metrics were used to measure the Spark NLP model in the Deeper Clinical Document Understanding Using Relation Extraction paper on the PGR dataset?
Macro F1
What metrics were used to measure the SAISORE+CR+ET-SciBERT model in the SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction paper on the GDA dataset?
F1
What metrics were used to measure the Dense-CCNet-SciBERTbase model in A Densely Connected Criss-Cross Attention Network for Document-level Relation Extraction paper on the GDA dataset?
F1
What metrics were used to measure the DRE-MIR-SciBERT model in A Masked Image Reconstruction Network for Document-level Relation Extraction paper on the GDA dataset?
F1
What metrics were used to measure the DocuNet-SciBERTbase model in the Document-level Relation Extraction as Semantic Segmentation paper on the GDA dataset?
F1
What metrics were used to measure the seq2rel (entity hinting) model in A sequence-to-sequence approach for document-level relation extraction paper on the GDA dataset?
F1
What metrics were used to measure the CGM2IR-SciBERTbase model in the Document-level Relation Extraction with Context Guided Mention Integration and Inter-pair Reasoning paper on the GDA dataset?
F1
What metrics were used to measure the SciBERT-ATLOPBASE model in the Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling paper on the GDA dataset?
F1
What metrics were used to measure the SSANBiaffine model in the Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction paper on the GDA dataset?
F1
What metrics were used to measure the LSR w/o MDP Nodes model in the Reasoning with Latent Structure Refinement for Document-Level Relation Extraction paper on the GDA dataset?
F1
What metrics were used to measure the KB-both model in the Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution paper on the DWIE dataset?
F1-Hard
What metrics were used to measure the Joint+AttProp model in the DWIE: an entity-centric dataset for multi-task document-level information extraction paper on the DWIE dataset?
F1-Hard
What metrics were used to measure the Stacked_LinkedBERT model in the Exploiting Unary Relations with Stacked Learning for Relation Extraction paper on the LPSC-hasproperty dataset?
F1 (micro)
What metrics were used to measure the EXOBRAIN model in the Improving Sentence-Level Relation Extraction through Curriculum Learning paper on the Re-TACRED dataset?
F1
What metrics were used to measure the RoBERTa-large-typed-marker model in An Improved Baseline for Sentence-level Relation Extraction paper on the Re-TACRED dataset?
F1
What metrics were used to measure the GenPT (RoBERTa) model in the Generative Prompt Tuning for Relation Classification paper on the Re-TACRED dataset?
F1
What metrics were used to measure the REBEL (no entity type marker) model in the REBEL: Relation Extraction By End-to-end Language generation paper on the Re-TACRED dataset?
F1
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the Re-TACRED dataset?
F1
What metrics were used to measure the C-GCN model in the Graph Convolution over Pruned Dependency Trees Improves Relation Extraction paper on the Re-TACRED dataset?
F1
What metrics were used to measure the PA-LSTM model in the Position-aware Attention and Supervised Data Improve Slot Filling paper on the Re-TACRED dataset?
F1
What metrics were used to measure the LLM-QA4R (Zero-shot) model in the Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors paper on the Re-TACRED dataset?
F1
What metrics were used to measure the ReRe model in the Revisiting the Negative Data of Distantly Supervised Relation Extraction paper on the NYT21 dataset?
F1
What metrics were used to measure the ReRe (exact) model in the Revisiting the Negative Data of Distantly Supervised Relation Extraction paper on the NYT21 dataset?
F1
What metrics were used to measure the TPLinker (exact) model in the Revisiting the Negative Data of Distantly Supervised Relation Extraction paper on the NYT21 dataset?
F1
What metrics were used to measure the CasRel (exact) model in the Revisiting the Negative Data of Distantly Supervised Relation Extraction paper on the NYT21 dataset?
F1
What metrics were used to measure the PFN model in A Partition Filter Network for Joint Entity and Relation Extraction paper on the ADE Corpus dataset?
NER Macro F1, RE+ Macro F1
What metrics were used to measure the ContextAtt model in the Context-Aware Representations for Knowledge Base Relation Extraction paper on the Wikipedia-Wikidata relations dataset?
Error rate
What metrics were used to measure the SciBert (Finetune) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the BioM-BERT model in the BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the SciFive Large model in the SciFive: a text-to-text transformer model for biomedical literature paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the KeBioLM model in the Improving Biomedical Pretrained Language Models with Knowledge paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the BioT5X (base) model in the SciFive: a text-to-text transformer model for biomedical literature paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the BioMegatron model in the BioMegatron: Larger Biomedical Domain Language Model paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the BioBERT model in the BioBERT: a pre-trained biomedical language representation model for biomedical text mining paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the NCBI_BERT(large) (P) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the ELECTRAMed model in the ELECTRAMed: a new pre-trained language representation model for biomedical NLP paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the CharacterBERT (base, medical) model in the CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters paper on the ChemProt dataset?
F1, Micro F1
What metrics were used to measure the SAISORE+CR+ET-SciBERT model in the SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction paper on the CDR dataset?
F1
What metrics were used to measure the Dense-CCNet-SciBERTbase model in A Densely Connected Criss-Cross Attention Network for Document-level Relation Extraction paper on the CDR dataset?
F1
What metrics were used to measure the DRE-MIR-SciBERT model in A Masked Image Reconstruction Network for Document-level Relation Extraction paper on the CDR dataset?
F1
What metrics were used to measure the DocuNet-SciBERTbase model in the Document-level Relation Extraction as Semantic Segmentation paper on the CDR dataset?
F1
What metrics were used to measure the CGM2IR-SciBERTbase model in the Document-level Relation Extraction with Context Guided Mention Integration and Inter-pair Reasoning paper on the CDR dataset?
F1
What metrics were used to measure the SciBERT-ATLOPBASE model in the Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling paper on the CDR dataset?
F1
What metrics were used to measure the SSANBiaffine model in the Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction paper on the CDR dataset?
F1
What metrics were used to measure the seq2rel (entity hinting) model in A sequence-to-sequence approach for document-level relation extraction paper on the CDR dataset?
F1
What metrics were used to measure the LSR w/o MDP Nodes model in the Reasoning with Latent Structure Refinement for Document-Level Relation Extraction paper on the CDR dataset?
F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the GAD dataset?
F1, Micro F1
What metrics were used to measure the KeBioLM model in the Improving Biomedical Pretrained Language Models with Knowledge paper on the GAD dataset?
F1, Micro F1
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the GAD dataset?
F1, Micro F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the DDI dataset?
F1, Micro F1
What metrics were used to measure the KeBioLM model in the Improving Biomedical Pretrained Language Models with Knowledge paper on the DDI dataset?
F1, Micro F1
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the DDI dataset?
F1, Micro F1
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the UNiST (LARGE) model in the Unified Semantic Typing with Meaningful Label Inference paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the RE-MC model in the Enhancing Targeted Minority Class Prediction in Sentence-Level Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the GenPT (T5) model in the Generative Prompt Tuning for Relation Classification paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the RECENT+SpanBERT model in the Relation Classification with Entity Type Restriction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the SuRE (PEGASUS-large) model in the Summarization as Indirect Supervision for Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the EXOBRAIN model in the Improving Sentence-Level Relation Extraction through Curriculum Learning paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the Relation Reduction model in the Relation Classification as Two-way Span-Prediction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the RoBERTa-large-typed-marker model in An Improved Baseline for Sentence-level Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the NLI_DeBERTa model in the Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the Noise-robust Co-regularization + BERT-large model in the Learning from Noisy Labels for Entity-Centric Information Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the DeNERT-KG model in the DeNERT-KG: Named Entity and Relation Extraction Model Using DQN, Knowledge Graph, and BERT paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the K-ADAPTER (F+L) model in the K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the TANL model in the Structured Prediction as Translation between Augmented Natural Languages paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the KEPLER model in the KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the BERTEM+MTB model in the Matching the Blanks: Distributional Similarity for Relation Learning paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the KnowBert-W+W model in the Knowledge Enhanced Contextual Word Representations paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the DG-SpanBERT-large model in the Efficient long-distance relation extraction with DG-SpanBERT paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the RELA model in the Sequence Generation with Label Augmentation for Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the NLI_RoBERTa model in the Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the SpanBERT-large model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the GDPNet model in the GDPNet: Refining Latent Multi-View Graph for Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)
What metrics were used to measure the Contrastive Pre-training model in the Learning from Context or Names? An Empirical Study on Neural Relation Extraction paper on the TACRED dataset?
F1, F1 (10% Few-Shot), F1 (5% Few-Shot), F1 (1% Few-Shot), F1 (Zero-Shot)