prompts | metrics_response |
|---|---|
What metrics were used to measure the MLMET model in the Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model paper on the Open Entity dataset? | F1 |
What metrics were used to measure the RoBERTa-Large + NPCRF (replicated by Adaseq) model in the Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field paper on the Open Entity dataset? | F1 |
What metrics were used to measure the LRN model in the Fine-grained Entity Typing via Label Reasoning paper on the Open Entity dataset? | F1 |
What metrics were used to measure the Box4Type model in the Modeling Fine-Grained Entity Types with Box Embeddings paper on the Open Entity dataset? | F1 |
What metrics were used to measure the RoBERTa-Large (replicated by Adaseq) model in the Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field paper on the Open Entity dataset? | F1 |
What metrics were used to measure the LDET model in the Learning to Denoise Distantly-Labeled Data for Entity Typing paper on the Open Entity dataset? | F1 |
What metrics were used to measure the LabelGCN model in the Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing paper on the Open Entity dataset? | F1 |
What metrics were used to measure the UFET-biLSTM model in the Ultra-Fine Entity Typing paper on the Open Entity dataset? | F1 |
What metrics were used to measure the LITE model in the Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference paper on the OntoNotes dataset? | Macro F1, Micro F1 |
What metrics were used to measure the TS-TCC model in the Time-Series Representation Learning via Temporal and Contextual Contrasting paper on the Epilepsy seizure prediction dataset? | 1:1 Accuracy |
What metrics were used to measure the Resnet-50: 80% Sparse model in the Rigging the Lottery: Making All Tickets Winners paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet-50: 90% Sparse model in the Rigging the Lottery: Making All Tickets Winners paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet-50: 80% Sparse 100 epochs model in the Sparse Training via Boosting Pruning Plasticity with Neuroregeneration paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet-50: 80% Sparse 100 epochs model in the Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet-50: 90% Sparse 100 epochs model in the Sparse Training via Boosting Pruning Plasticity with Neuroregeneration paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet-50: 90% Sparse 100 epochs model in the Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the MobileNet-v1: 75% Sparse model in the Rigging the Lottery: Making All Tickets Winners paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the MobileNet-v1: 90% Sparse model in the Rigging the Lottery: Making All Tickets Winners paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the SINDy model in the Sparse learning of stochastic dynamic equations paper on the ImageNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Resnet18 model in the Adaptive Neural Connections for Sparsity Learning paper on the ImageNet32 dataset? | Sparsity |
What metrics were used to measure the Resnet18 model in the Adaptive Neural Connections for Sparsity Learning paper on the CINIC-10 dataset? | Sparsity |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the MEP model in the Global Entity Disambiguation with BERT paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the WNED-CWEB dataset? | Micro-F1 |
What metrics were used to measure the Model F+ model in the Entity Linking in 100 Languages paper on the Mewsli-9 dataset? | Micro Precision |
What metrics were used to measure the mGENRE model in the Multilingual Autoregressive Entity Linking paper on the Mewsli-9 dataset? | Micro Precision |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the ShadowLink-Top dataset? | Micro-F1 |
What metrics were used to measure the DeepType model in the DeepType: Multilingual Entity Linking by Neural Type System Evolution paper on the TAC2010 dataset? | Micro Precision |
What metrics were used to measure the NTEE model in the Learning Distributed Representations of Texts and Entities from Knowledge Base paper on the TAC2010 dataset? | Micro Precision |
What metrics were used to measure the This work+CtxLSTMs+LDC+MPCM model in the Neural Cross-Lingual Entity Linking paper on the TAC2010 dataset? | Micro Precision |
What metrics were used to measure the Wikipedia2Vec model in the Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation paper on the TAC2010 dataset? | Micro Precision |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the ACE2004 dataset? | Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the ShadowLink-Shadow dataset? | Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the MEP model in the Global Entity Disambiguation with BERT paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the WNED-WIKI dataset? | Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the DCA-SL + Triples model in the Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the DeepType model in the DeepType: Multilingual Entity Linking by Neural Type System Evolution paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the NTEE model in the Learning Distributed Representations of Texts and Entities from Knowledge Base paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the DCA-SL model in the Learning Dynamic Context Augmentation for Global Entity Linking paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Fang et al. (2019) model in the Joint Entity Linking with Deep Reinforcement Learning paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the This work+CtxLSTMs+LDC+MPCM model in the Neural Cross-Lingual Entity Linking paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Chen et al. (2020) model in the Improving Entity Linking by Modeling Latent Entity Type Information paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Wikipedia2Vec-GBRT model in the Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the ELDEN model in the ELDEN: Improved Entity Linking Using Densified Knowledge Graphs paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Wikipedia2Vec model in the Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Le & Titov (2019) model in the Boosting Entity Linking Performance by Leveraging Unlabeled Documents paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Hoffart et al. model in the Robust Disambiguation of Named Entities in Text paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the Bootleg model in the Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the BERT-Entity-Sim (local & global) AIDA-B model in the Improving Entity Linking by Modeling Latent Entity Type Information paper on the AIDA-CoNLL dataset? | In-KB Accuracy, Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the AQUAINT dataset? | Micro-F1 |
What metrics were used to measure the confidence-order model in the Global Entity Disambiguation with BERT paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the KBED model in the Improving Entity Disambiguation by Reasoning over a Knowledge Base paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the GENRE model in the Autoregressive Entity Retrieval paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the Global model in the Deep Joint Entity Disambiguation with Local Neural Attention paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the NER4EL model in the Named Entity Recognition for Entity Linking: What Works and What’s Next paper on the MSNBC dataset? | Micro-F1 |
What metrics were used to measure the ModelGenesis model in the Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis paper on the PE-CAD FPRED dataset? | AUC |
What metrics were used to measure the CodeTrans-TF-Large model in the CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing paper on the CommitGen dataset? | BLEU-4 |
What metrics were used to measure the CFDN model in the Saliency Detection via Global Context Enhanced Feature Fusion and Edge Weighted Loss paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the InSPyReNet model in the Revisiting Image Pyramid Structure for High Resolution Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the M3Net-S model in the M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the TRACER-TE7 model in the TRACER: Extreme Attention Guided Salient Object Tracing Network paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the M3Net-R model in the M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the HybridSOD model in the A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the PoolNet (VGG-16) model in the A Simple Pooling-Based Design for Real-Time Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the DSS (Res2Net-50) model in the Res2Net: A New Multi-scale Backbone Architecture paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the CPD-R (ResNet50) model in the Cascaded Partial Decoder for Fast and Accurate Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the C4Net model in the C$^{4}$Net: Contextual Compression and Complementary Combination Network for Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the BMPM model in the A Bi-Directional Message Passing Model for Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the BASNet model in the BASNet: Boundary-Aware Salient Object Detection paper on the PASCAL-S dataset? | S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, F-Score, Weighted F-Measure |
What metrics were used to measure the CPD model in the Cascaded Partial Decoder for Fast and Accurate Salient Object Detection paper on the ISTD dataset? | Balanced Error Rate |
What metrics were used to measure the BMPM model in the A Bi-Directional Message Passing Model for Salient Object Detection paper on the ISTD dataset? | Balanced Error Rate |
What metrics were used to measure the JDR model in the Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal paper on the ISTD dataset? | Balanced Error Rate |
What metrics were used to measure the NLDF model in the Non-Local Deep Features for Salient Object Detection paper on the ISTD dataset? | Balanced Error Rate |