| prompts | metrics_response |
|---|---|
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the Zero Shot model in the A Critical Assessment of State-of-the-Art in Entity Alignment paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the EVA model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf & iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the dbp15k fr-en dataset? | Hits@1 |
What metrics were used to measure the MTransE w/ background ranking model in the Knowing the No-match: Entity Alignment with Dangling Cases paper on the DBP2.0 zh-en dataset? | dangling entity detection F1, Entity Alignment (Consolidated) F1 |
What metrics were used to measure the AliNet w/ background ranking model in the Knowing the No-match: Entity Alignment with Dangling Cases paper on the DBP2.0 zh-en dataset? | dangling entity detection F1, Entity Alignment (Consolidated) F1 |
What metrics were used to measure the MEAformer (seed 60%) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the MEAformer (seed 60% w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the MEAformer (seed 40%) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the MEAformer (seed 40% w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the MEAformer (seed 20%) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the MEAformer (seed 20% w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the FBDB15k dataset? | Hits@1 |
What metrics were used to measure the RELIC + CoNLL-Aida tuning model in the Learning Cross-Context Entity Representations from Text paper on the CoNLL-Aida dataset? | Accuracy |
What metrics were used to measure the Raiman & Raiman 2018 model in the DeepType: Multilingual Entity Linking by Neural Type System Evolution paper on the CoNLL-Aida dataset? | Accuracy |
What metrics were used to measure the Radhakrishnan et al. 2018 model in the ELDEN: Improved Entity Linking Using Densified Knowledge Graphs paper on the CoNLL-Aida dataset? | Accuracy |
What metrics were used to measure the SemEHR+WS (rules+BlueBERT) with tuning number of training data model in the Ontology-Driven and Weakly Supervised Rare Disease Identification from Clinical Notes paper on the Rare Diseases Mentions in MIMIC-III (Text-to-UMLS) dataset? | F1 |
What metrics were used to measure the SemEHR+WS (rules+BlueBERT) model in the Rare Disease Identification from Clinical Notes with Ontologies and Weak Supervision paper on the Rare Diseases Mentions in MIMIC-III (Text-to-UMLS) dataset? | F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BLINK model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the RAG model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multitask model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART + DPR model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the chriskuei model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multi-task small model in the paper on the KILT: WNED-CWEB dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the WebQSP-WD dataset? | F1 |
What metrics were used to measure the VCG model in the Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories paper on the WebQSP-WD dataset? | F1 |
What metrics were used to measure the ArboEL model in the Entity Linking via Explicit Mention-Mention Coreference Modeling paper on the ZESHEL dataset? | Unnormalized Accuracy, Recall@64 |
What metrics were used to measure the ArboEL-dual model in the Entity Linking via Explicit Mention-Mention Coreference Modeling paper on the ZESHEL dataset? | Unnormalized Accuracy, Recall@64 |
What metrics were used to measure the baseline model in the WikiGUM: Exhaustive Entity Linking for Wikification in 12 Genres paper on the GUM dataset? | F1 |
What metrics were used to measure the SpEL-large (2023) model in the SpEL: Structured Prediction for Entity Linking paper on the AIDA/testc dataset? | Micro-F1 strong |
What metrics were used to measure the SpEL-base (2023) model in the SpEL: Structured Prediction for Entity Linking paper on the AIDA/testc dataset? | Micro-F1 strong |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the KORE50 dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the Raiman & Raiman 2018 model in the DeepType: Multilingual Entity Linking by Neural Type System Evolution paper on the TAC-KBP 2010 dataset? | Accuracy |
What metrics were used to measure the RELIC + CoNLL-Aida tuning model in the Learning Cross-Context Entity Representations from Text paper on the TAC-KBP 2010 dataset? | Accuracy |
What metrics were used to measure the Radhakrishnan et al. 2018 model in the ELDEN: Improved Entity Linking Using Densified Knowledge Graphs paper on the TAC-KBP 2010 dataset? | Accuracy |
What metrics were used to measure the Kannan Ravi et al. (2021) model in the CHOLAN: A Modular Approach for Neural Entity Linking on Wikipedia and Wikidata paper on the MSNBC dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the De Cao et al. (2021a) model in the Autoregressive Entity Retrieval paper on the MSNBC dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the Kolitsas et al. (2018) model in the End-to-End Neural Entity Linking paper on the MSNBC dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the van Hulst et al. (2020) model in the REL: An Entity Linker Standing on the Shoulders of Giants paper on the MSNBC dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the MSNBC dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the transformers model in the Word Sense Disambiguation with Transformer Models paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the CTLR model in the CTLR@WiC-TSV: Target Sense Verification using Marked Inputs and Pre-trained Models paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the GlossBert-ws model in the GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the Bert-base model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the Unsupervised Bert model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the FastText model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the All true model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the Human model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset? | Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific |
What metrics were used to measure the Sieve-based+SapBERT model in the Chemical identification and indexing in PubMed full-text articles using deep learning and heuristics paper on the BC7 NLM-Chem dataset? | F1-score (strict) |
What metrics were used to measure the Sieve-based model in the Chemical detection and indexing in PubMed full text articles using deep learning and rule-based methods paper on the BC7 NLM-Chem dataset? | F1-score (strict) |
What metrics were used to measure the ArboEL model in the Entity Linking via Explicit Mention-Mention Coreference Modeling paper on the MedMentions dataset? | Accuracy, Recall@64 |
What metrics were used to measure the ArboEL-dual model in the Entity Linking via Explicit Mention-Mention Coreference Modeling paper on the MedMentions dataset? | Accuracy, Recall@64 |
What metrics were used to measure the BioBART model in the BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model paper on the MedMentions dataset? | Accuracy, Recall@64 |
What metrics were used to measure the SemEHR+WS (rules+BlueBERT) with tuning number of training data model in the Ontology-Driven and Weakly Supervised Rare Disease Identification from Clinical Notes paper on the Rare Diseases Mentions in MIMIC-III dataset? | F1 |
What metrics were used to measure the SemEHR+WS (rules+BlueBERT) model in the Rare Disease Identification from Clinical Notes with Ontologies and Weak Supervision paper on the Rare Diseases Mentions in MIMIC-III dataset? | F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BLINK model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART + DPR model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the RAG model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multitask model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the chriskuei model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multi-task small model in the paper on the KILT: AIDA-YAGO2 dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the N3-Reuters-128 dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the E2E model in the End-to-End Neural Entity Linking paper on the N3-Reuters-128 dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the SpEL-large (2023) model in the SpEL: Structured Prediction for Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the SpEL-base (2023) model in the SpEL: Structured Prediction for Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Zhang et al. (2021) model in the EntQA: Entity Linking as Question Answering paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the De Cao et al. (2021b) model in the Highly Parallel Autoregressive Entity Linking with Discriminative Correction paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the De Cao et al. (2021a) model in the Autoregressive Entity Retrieval paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Kannan Ravi et al. (2021) model in the CHOLAN: A Modular Approach for Neural Entity Linking on Wikipedia and Wikidata paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Kolitsas et al. (2018) model in the End-to-End Neural Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Martins et al. (2019) model in the Joint Learning of Named Entity Recognition and Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the van Hulst et al. (2020) model in the REL: An Entity Linker Standing on the Shoulders of Giants paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Broscheit (2019) model in the Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Févry et al. (2020b) model in the Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Peters et al. (2019) model in the Knowledge Enhanced Contextual Word Representations paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the Hoffart et al. (2011) model in the Robust Disambiguation of Named Entities in Text paper on the AIDA-CoNLL dataset? | Micro-F1 strong |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the FIGER dataset? | Accuracy, Macro F1, Micro F1 |
What metrics were used to measure the E2E model in the End-to-End Neural Entity Linking paper on the OKE-2015 dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the OKE-2015 dataset? | Micro-F1, Micro-F1 strong |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BLINK model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the RAG model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multitask model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART + DPR model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the chriskuei model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multi-task small model in the paper on the KILT: WNED-WIKI dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
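
A minimal sketch of how one might load and inspect the two string columns above (`prompts`, `metrics_response`) with the Hugging Face `datasets` library. The repository ID and the `train` split name are placeholders, not the dataset's actual Hub identifiers; substitute the real ones.

```python
# Minimal sketch: load the dataset and print a few (prompt, response) pairs.
# The repo ID below is hypothetical -- replace it with the dataset's Hub ID.
from datasets import load_dataset

ds = load_dataset("user/metrics-qa")  # hypothetical repo ID

# Assuming a "train" split; each row is a dict with the two string columns.
for row in ds["train"].select(range(3)):
    print(row["prompts"])
    print("  ->", row["metrics_response"])
```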