prompts: string (lengths 81–413)
metrics_response: string (lengths 0–371)
What metrics were used to measure the SemEHR+WS (rules+BlueBERT) with tuning number of training data model in the Ontology-Driven and Weakly Supervised Rare Disease Identification from Clinical Notes paper on the Rare Diseases Mentions in MIMIC-III Radiology Reports (Text-to-UMLS) dataset?
F1
What metrics were used to measure the De Cao et al. (2021a) model in the Autoregressive Entity Retrieval paper on the Derczynski dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the Derczynski dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the van Hulst et al. (2020) model in the REL: An Entity Linker Standing on the Shoulders of Giants paper on the Derczynski dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the Kolitsas et al. (2018) model in the End-to-End Neural Entity Linking paper on the Derczynski dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the OKE-2016 dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the E2E model in the End-to-End Neural Entity Linking paper on the OKE-2016 dataset?
Micro-F1, Micro-F1 strong
What metrics were used to measure the TPP (LayoutMask) model in the Reading Order Matters: Information Extraction from Visually-rich Documents by Token Path Prediction paper on the FUNSD dataset?
F1
What metrics were used to measure the DocTr model in the DocTr: Document Transformer for Structured Information Extraction in Documents paper on the FUNSD dataset?
F1
What metrics were used to measure the SINGU_GROUP model in the DGCN Based Solution for Entity Linking on Visual Rich Document paper on the FUNSD dataset?
F1
What metrics were used to measure the SERA model in the Entity Relation Extraction as Dependency Parsing in Visually Rich Documents paper on the FUNSD dataset?
F1
What metrics were used to measure the Doc2Graph model in the Doc2Graph: a Task Agnostic Document Understanding Framework based on Graph Neural Networks paper on the FUNSD dataset?
F1
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the CoNLL03 dataset?
F1
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the CoNLL03 dataset?
F1
What metrics were used to measure the DeepStruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the CoNLL03 dataset?
F1
What metrics were used to measure the CamemBERT (subword masking) model in the CamemBERT: a Tasty French Language Model paper on the French Treebank dataset?
F1, Precision, Recall
What metrics were used to measure the BLSTM-CNN-Char (SparkNLP) model in the Biomedical Named Entity Recognition at Scale paper on the BioNLP13-CG dataset?
F1
What metrics were used to measure the BertForTokenClassification (Spark NLP) model in the Accurate clinical and biomedical Named entity recognition at scale paper on the BioNLP13-CG dataset?
F1
What metrics were used to measure the aimped model in the paper on the BioNLP13-CG dataset?
F1
What metrics were used to measure the saattrupdan/nbailab-base-ner-scandi model in the paper on the DaNE dataset?
Micro-average F1
What metrics were used to measure the DaCy-large model in the DaCy: A Unified Framework for Danish NLP paper on the DaNE dataset?
Micro-average F1
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 - PUD (Swedish) dataset?
F1 (micro)
What metrics were used to measure the ACE + document-context model in the Automated Concatenation of Embeddings for Structured Prediction paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the FLERT XLM-R model in the FLERT: Document-Level Features for Named Entity Recognition paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the Cross-sentence context (CMV) model in the Exploring Cross-sentence Contexts for Named Entity Recognition with BERT paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the Biaffine-NER model in the Named Entity Recognition as Dependency Parsing paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the Straková et al., 2019 model in the Neural Architectures for Nested NER through Linearization paper on the CoNLL 2003 (German) dataset?
F1
What metrics were used to measure the HGN model in the Hero-Gang Neural Model For Named Entity Recognition paper on the OntoNotes 5.0 dataset?
Average F1
What metrics were used to measure the PubMedBERT+MLP+CRF model in the Chemical identification and indexing in PubMed full-text articles using deep learning and heuristics paper on the BC7 NLM-Chem dataset?
F1-score (strict)
What metrics were used to measure the PubMedBERT+MLP+CRF model in the Chemical detection and indexing in PubMed full text articles using deep learning and rule-based methods paper on the BC7 NLM-Chem dataset?
F1-score (strict)
What metrics were used to measure the FT-Bangla BERT Large model in the BanglaCoNER: Towards Robust Bangla Complex Named Entity Recognition paper on the SemEval 2022-2023 - BanglaCoNER dataset?
F1
What metrics were used to measure the BERT model in the Language Modelling with Pixels paper on the MasakhaNER dataset?
ENG, Params, AMH, IBO, HAU, KIN, LUG, LUO, PCM, SWA, WOL, YOR
What metrics were used to measure the PIXEL model in the Language Modelling with Pixels paper on the MasakhaNER dataset?
ENG, Params, AMH, IBO, HAU, KIN, LUG, LUO, PCM, SWA, WOL, YOR
What metrics were used to measure the ACE + document-context model in the Automated Concatenation of Embeddings for Structured Prediction paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the FLERT XLM-R model in the FLERT: Document-Level Features for Named Entity Recognition paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the Biaffine-NER model in the Named Entity Recognition as Dependency Parsing paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the Cross-sentence context (CMV) model in the Exploring Cross-sentence Contexts for Named Entity Recognition with BERT paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the Straková et al., 2019 model in the Neural Architectures for Nested NER through Linearization paper on the CoNLL 2002 (Dutch) dataset?
F1
What metrics were used to measure the BINDER model in the Optimizing Bi-Encoder for Named Entity Recognition via Contrastive Learning paper on the BC5CDR dataset?
F1
What metrics were used to measure the ConNER model in the Enhancing Label Consistency on Document-level Named Entity Recognition paper on the BC5CDR dataset?
F1
What metrics were used to measure the CL-L2 model in the Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning paper on the BC5CDR dataset?
F1
What metrics were used to measure the aimped model in the paper on the BC5CDR dataset?
F1
What metrics were used to measure the BertForTokenClassification (Spark NLP) model in the Accurate clinical and biomedical Named entity recognition at scale paper on the BC5CDR dataset?
F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BC5CDR dataset?
F1
What metrics were used to measure the ELECTRAMed model in the ELECTRAMed: a new pre-trained language representation model for biomedical NLP paper on the BC5CDR dataset?
F1
What metrics were used to measure the BLSTM-CNN-Char (SparkNLP) model in the Biomedical Named Entity Recognition at Scale paper on the BC5CDR dataset?
F1
What metrics were used to measure the Spark NLP model in the Biomedical Named Entity Recognition at Scale paper on the BC5CDR dataset?
F1
What metrics were used to measure the BioFLAIR model in the BioFLAIR: Pretrained Pooled Contextualized Embeddings for Biomedical Sequence Labeling Tasks paper on the BC5CDR dataset?
F1
What metrics were used to measure the SciBERT (SciVocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the BC5CDR dataset?
F1
What metrics were used to measure the GoLLIE model in the GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction paper on the BC5CDR dataset?
F1
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the BC5CDR dataset?
F1
What metrics were used to measure the RDANER model in the A Robust and Domain-Adaptive Approach for Low-Resource Named Entity Recognition paper on the BC5CDR dataset?
F1
What metrics were used to measure the CollaboNet model in the CollaboNet: collaboration of deep neural networks for biomedical named entity recognition paper on the BC5CDR dataset?
F1
What metrics were used to measure the BERT-CRF model in the Focusing on Potential Named Entities During Active Label Acquisition paper on the BC5CDR dataset?
F1
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 (Serbian) dataset?
F1 (micro)
What metrics were used to measure the BERT-CRF (Replicated in AdaSeq) model in the Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning paper on the CMeEE dataset?
F1, Micro F1
What metrics were used to measure the MacBERT-large model in the CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark paper on the CMeEE dataset?
F1, Micro F1
What metrics were used to measure the cfilt/HiNER-original-xlm-roberta-large model in the HiNER: A Large Hindi Named Entity Recognition Dataset paper on the HiNER-original dataset?
F1-score (Weighted)
What metrics were used to measure the cfilt/HiNER-original-muril-base-cased model in the HiNER: A Large Hindi Named Entity Recognition Dataset paper on the HiNER-original dataset?
F1-score (Weighted)
What metrics were used to measure the BiLSTM-CRF with ELMo model in the Using Similarity Measures to Select Pretraining Data for NER paper on the WetLab dataset?
F1
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 - PUD (Chinese) dataset?
F1 (micro)
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 (English) dataset?
F1 (micro)
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 (Danish) dataset?
F1 (micro)
What metrics were used to measure the Spark NLP model in the Biomedical Named Entity Recognition at Scale paper on the BC2GM dataset?
F1
What metrics were used to measure the BioDistilBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC2GM dataset?
F1
What metrics were used to measure the CompactBioBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC2GM dataset?
F1
What metrics were used to measure the DistilBioBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC2GM dataset?
F1
What metrics were used to measure the HGN model in the Hero-Gang Neural Model For Named Entity Recognition paper on the BC2GM dataset?
F1
What metrics were used to measure the BioKMNER + BioBERT model in the Improving Biomedical Named Entity Recognition with Syntactic Information paper on the BC2GM dataset?
F1
What metrics were used to measure the BioMobileBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC2GM dataset?
F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BC2GM dataset?
F1
What metrics were used to measure the KeBioLM model in the Improving Biomedical Pretrained Language Models with Knowledge paper on the BC2GM dataset?
F1
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the BC2GM dataset?
F1
What metrics were used to measure the XLM-RoBERTa model in the Analysis Of Contextual and Non-Contextual Word Embedding Models For Hindi NER With Web Application For Data Collection paper on the IECSIL FIRE-2018 Shared Task dataset?
Average F1
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 (Slovak) dataset?
F1 (micro)
What metrics were used to measure the W2V2-L-LL60K (pipeline approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-LS960 (pipeline approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the Wav2Seq (from HuBERT-large) model in the Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-L-LL60K (e2e approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-LS960 (e2e approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the HuBERT-B-LS960 (e2e approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-VP100K (e2e approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-L-LL60K (pipeline approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-L-LL60K (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-LS960 (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the HuBERT-B-LS960 (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-LS960 (pipeline approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the W2V2-B-VP100K (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset?
F1 (%), label-F1 (%), Text model
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 - PUD (English) dataset?
F1 (micro)
What metrics were used to measure the UNER XLM-R model in the Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark paper on the UNER v1 - PUD (Portuguese) dataset?
F1 (micro)
What metrics were used to measure the BioMegatron model in the BioMegatron: Larger Biomedical Domain Language Model paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the HGN model in the Hero-Gang Neural Model For Named Entity Recognition paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the SciFive-Large model in the SciFive: a text-to-text transformer model for biomedical literature paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the NCBI_BERT(base) (P) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the KeBioLM model in the Improving Biomedical Pretrained Language Models with Knowledge paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the BioDistilBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC5CDR-disease dataset?
F1
What metrics were used to measure the DistilBioBERT model in the On the Effectiveness of Compact Biomedical Transformers paper on the BC5CDR-disease dataset?
F1