prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
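A minimal sketch of how rows in this file can be represented and queried: each record pairs a free-text question (`prompts`) with its answer (`metrics_response`), mirroring the two columns above. The field names come from the header; the two sample rows are copied verbatim from the data below, and the `responses_for` helper is an illustrative function, not part of the dataset itself.

```python
# Two rows reproduced from the dataset, using the column names as dict keys.
rows = [
    {
        "prompts": ("What metrics were used to measure the Chinchilla-70B "
                    "(few-shot, k=5) model in the Training Compute-Optimal "
                    "Large Language Models paper on the BIG-bench "
                    "(Anachronisms) dataset?"),
        "metrics_response": "Accuracy",
    },
    {
        "prompts": ("What metrics were used to measure the SPIN model in the "
                    "Knowledge-Design: Pushing the Limit of Protein Design "
                    "via Knowledge Refinement paper on the TS50 dataset?"),
        "metrics_response": "Sequence Recovery %(All)",
    },
]

def responses_for(keyword: str) -> list[str]:
    """Return the metrics_response of every row whose prompt mentions keyword."""
    return [r["metrics_response"] for r in rows if keyword in r["prompts"]]

print(responses_for("Chinchilla-70B"))  # ['Accuracy']
```

The same pattern scales to the full file once all rows are loaded into `rows`.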
What metrics were used to measure the PDUN model in the Probabilistic framework for solving Visual Dialog paper on the Visual Dialog v0.9 dataset?
1 in 10 R@5, Recall@10
What metrics were used to measure the KEF model in the Word Sense Disambiguation: A comprehensive knowledge exploitation framework paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the WSD-TM model in the Knowledge-based Word Sense Disambiguation using Topic Models paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the Babelfy model in the Entity Linking meets Word Sense Disambiguation: a Unified Approach paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the WN 1st sense baseline model in the Word Sense Disambiguation: A Unified Evaluation Framework and Empirical Comparison paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the UKBppr_w2w-nf model in the Random Walks for Knowledge-Based Word Sense Disambiguation paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the UKBppr_w2w model in the Random Walks for Knowledge-Based Word Sense Disambiguation paper on the Knowledge-based: dataset?
All, Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the OPT 175B model in the Galactica: A Large Language Model for Science paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the GAL 120B (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the GAL 30B (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the BLOOM 176B model in the Galactica: A Large Language Model for Science paper on the BIG-bench (Anachronisms) dataset?
Accuracy
What metrics were used to measure the SPIN model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the TS50 dataset?
Sequence Recovery %(All)
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SensEval 2 dataset?
F1
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SensEval 2 dataset?
F1
What metrics were used to measure the LSTMLP (T:OMSTI, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 2 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 2 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 2 dataset?
F1
What metrics were used to measure the LSTM (T:SemCor) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 2 dataset?
F1
What metrics were used to measure the GASext (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 2 dataset?
F1
What metrics were used to measure the LSTM (T:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 2 dataset?
F1
What metrics were used to measure the GASext (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 2 dataset?
F1
What metrics were used to measure the GAS (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 2 dataset?
F1
What metrics were used to measure the GAS (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 2 dataset?
F1
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the LSTM (T:SemCor) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the LSTMLP (T:OMSTI, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the kNN-BERT + POS (training corpus: SemCor) model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the kNN-BERT model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the LSTM (T:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 17 dataset?
F1
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:SemCor, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:OMSTI, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:SemCor, U:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the LSTM (T:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the GASext (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the GASext (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the GAS (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the LSTM (T:SemCor) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the GAS (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the ShotgunWSD 2.0 model in the ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing paper on the SemEval 2013 Task 12 dataset?
F1, Unsupervised
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the LSTMLP (T:SemCor, U:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the LSTMLP (T:OMSTI, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the GASext (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the GAS (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the GASext (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the GAS (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the LSTM (T:SemCor) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the LSTM (T:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SensEval 3 Task 1 dataset?
F1
What metrics were used to measure the kNN-BERT model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SensEval 2 Lexical Sample dataset?
F1
What metrics were used to measure the BiLSTM with GloVe model in the Word Sense Disambiguation using a Bidirectional LSTM paper on the SensEval 2 Lexical Sample dataset?
F1
What metrics were used to measure the IMS + adapted CW model in the Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains paper on the SensEval 2 Lexical Sample dataset?
F1
What metrics were used to measure the kNN-BERT model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SensEval 3 Lexical Sample dataset?
F1
What metrics were used to measure the BiLSTM with GloVe model in the Word Sense Disambiguation using a Bidirectional LSTM paper on the SensEval 3 Lexical Sample dataset?
F1
What metrics were used to measure the IMS + adapted CW model in the Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains paper on the SensEval 3 Lexical Sample dataset?
F1
What metrics were used to measure the Single BiLSTM model in the One Single Deep Bidirectional LSTM Network for Word Sense Disambiguation of Text Data paper on the SensEval 3 Lexical Sample dataset?
F1
What metrics were used to measure the ConSeC+WNGC model in the ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ESR+WNGC model in the Improved Word Sense Disambiguation with Enhanced Sense Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ConSeC model in the ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ESCHER SemCor model in the ESC: Redesigning WSD with Extractive Sense Comprehension paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ESR model in the Improved Word Sense Disambiguation with Enhanced Sense Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the EWISER+WNGC model in the Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the SparseLMMS+WNGC model in the Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the BEM model in the Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the EWISER model in the Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ARES model in the With More Contexts Comes Better Performance: Contextualized Sense Embeddings for All-Round Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the SparseLMMS model in the Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the GlossBERT model in the GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the BERT (linear projection) model in the Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the BERT (nearest neighbour) model in the Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the supWSD<sub>emb</sub> model in the paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the GAS<sub>ext</sub> model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the GAS model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the Bi-LSTM<sub>att+LEX</sub> model in the Neural Sequence Learning Models for Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the Bi-LSTM<sub>att+LEX+POS</sub> model in the Neural Sequence Learning Models for Word Sense Disambiguation paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the context2vec model in the paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the ELMo model in the Deep contextualized word representations paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the supWSD model in the paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the MFS baseline model in the paper on the Supervised: dataset?
Senseval 2, Senseval 3, SemEval 2007, SemEval 2013, SemEval 2015
What metrics were used to measure the transformers model in the paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the CTLR model in the paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the GlossBert-ws model in the GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the Bert-base model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the Unsupervised Bert model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the FastText model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the All true model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the Human model in the WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context paper on the WiC-TSV dataset?
Task 1 Accuracy: all, Task 1 Accuracy: general purpose, Task 1 Accuracy: domain specific, Task 2 Accuracy: all, Task 2 Accuracy: general purpose, Task 2 Accuracy: domain specific, Task 3 Accuracy: all, Task 3 Accuracy: general purpose, Task 3 Accuracy: domain specific
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SemEval 2015 Task 13 dataset?
F1
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SemEval 2015 Task 13 dataset?
F1
What metrics were used to measure the GASext (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2015 Task 13 dataset?
F1