prompts (string, length 81–413)
metrics_response (string, length 0–371)
What metrics were used to measure the GASext (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2015 Task 13 dataset?
F1
What metrics were used to measure the GAS (Concatenation) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2015 Task 13 dataset?
F1
What metrics were used to measure the GAS (Linear) model in the Incorporating Glosses into Neural Word Sense Disambiguation paper on the SemEval 2015 Task 13 dataset?
F1
What metrics were used to measure the SemCor+WNGC, hypernyms model in the Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the SemCor+WNGT, vocabulary reduced, ensemble model in the Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the kNN-BERT + POS (training corpus: WNGT) model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:SemCor, U:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:SemCor, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the LSTMLP (T:OMSTI, U:1K) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the LSTM (T:SemCor) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the ShotgunWSD 2.0 model in the ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the kNN-BERT model in the Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the LSTM (T:OMSTI) model in the Semi-supervised Word Sense Disambiguation with Neural Models paper on the SemEval 2007 Task 7 dataset?
F1, Unsupervised
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the ruT5-large-finetune model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuBERT conversational model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuBERT plain model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the ruRoberta-large finetune model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the ruBert-base finetune model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the Multilingual Bert model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the ruT5-base-finetune model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the ruBert-large finetune model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the SBERT_Large model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuGPT3Large model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuGPT3Medium model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the MT5 Large model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the Golden Transformer model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuGPT3Small model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RUSSE dataset?
Accuracy
What metrics were used to measure the COSINE + Transductive Learning model in the Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the DeBERTa-Ensemble model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the Words in Context dataset?
Accuracy
What metrics were used to measure the BERT+DP model in the Towards better substitution-based word sense induction paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the AutoSense model in the AutoSense Model for Word Sense Induction paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the LSDP model in the Word Sense Induction with Neural biLM and Symmetric Patterns paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the MCC-S model in the Structured Generative Models of Continuous Features for Word Sense Induction paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the STM+w2v model in the A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the AI-KU model in the AI-KU: Using Substitute Vectors and Co-Occurrence Modeling For Word Sense Induction and Disambiguation paper on the SemEval 2013 dataset?
F-BC, F_NMI, AVG
What metrics were used to measure the BERT+DP model in the Towards better substitution-based word sense induction paper on the SemEval 2010 WSI dataset?
F-Score, V-Measure, AVG
What metrics were used to measure the AutoSense model in the AutoSense Model for Word Sense Induction paper on the SemEval 2010 WSI dataset?
F-Score, V-Measure, AVG
What metrics were used to measure the LDA model in the Unsupervised Word Sense Induction using Distributional Statistics paper on the SemEval 2010 WSI dataset?
F-Score, V-Measure, AVG
What metrics were used to measure the SE-WSI-fix model in the Sense Embedding Learning for Word Sense Induction paper on the SemEval 2010 WSI dataset?
F-Score, V-Measure, AVG
What metrics were used to measure the BNP-HC model in the Inducing Word Sense with Automatically Learned Hidden Concepts paper on the SemEval 2010 WSI dataset?
F-Score, V-Measure, AVG
What metrics were used to measure the Random Forest model in the Sleep quality prediction in caregivers using physiological signals paper on the 100 sleep nights of 8 caregivers dataset?
Accuracy
What metrics were used to measure the ALBERT model in the End-to-end Spoken Conversational Question Answering: Task, Dataset and Model paper on the Spoken-SQuAD dataset?
F1 score
What metrics were used to measure the SpeechBERT model in the SpeechBERT: An Audio-and-text Jointly Learned Language Model for End-to-end Spoken Question Answering paper on the Spoken-SQuAD dataset?
F1 score
What metrics were used to measure the QANet + GAN model in the Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation paper on the Spoken-SQuAD dataset?
F1 score
What metrics were used to measure the Baseline model in the Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension paper on the Spoken-SQuAD dataset?
F1 score
What metrics were used to measure the Finstreder (Conformer, character-based) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Conformer) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the AT-AT model in the Exploring Transfer Learning For End-to-End Spoken Language Understanding paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Quartznet) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Snips model in the Spoken Language Understanding on the Edge paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Google model in the Spoken Language Understanding on the Edge paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Real + synthetic model in the Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models paper on the Snips-SmartLights dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Conformer) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Timers and Such dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Quartznet) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Timers and Such dataset?
Accuracy (%)
What metrics were used to measure the Baseline model in the Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers paper on the Timers and Such dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Conformer, character-based) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartSpeaker dataset?
Accuracy-EN (%), Accuracy-FR (%)
What metrics were used to measure the Finstreder (Conformer) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartSpeaker dataset?
Accuracy-EN (%), Accuracy-FR (%)
What metrics were used to measure the Finstreder (Quartznet) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Snips-SmartSpeaker dataset?
Accuracy-EN (%), Accuracy-FR (%)
What metrics were used to measure the Snips model in the Spoken Language Understanding on the Edge paper on the Snips-SmartSpeaker dataset?
Accuracy-EN (%), Accuracy-FR (%)
What metrics were used to measure the Google model in the Spoken Language Understanding on the Edge paper on the Snips-SmartSpeaker dataset?
Accuracy-EN (%), Accuracy-FR (%)
What metrics were used to measure the Finstreder (Conformer + AMT, character-based) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Quartznet + AMT) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the textual-kd-slu model in the Two-stage Textual Knowledge Distillation for End-to-End Spoken Language Understanding paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Wav2Vec2.0-Classifier model in the Integration of Pre-trained Networks with Continuous Token Interface for End-to-End Spoken Language Understanding paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the E2E SLP two-step model in the Speech-language Pre-training for End-to-end Spoken Language Understanding paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Wav2vec 2.0 SSL model in the Do We Still Need Automatic Speech Recognition for Spoken Language Understanding? paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Conformer) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the AT-AT model in the Exploring Transfer Learning For End-to-End Spoken Language Understanding paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the BERT, AC Pretraining model in the End-to-End Spoken Language Understanding for Generalized Voice Assistants paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the 3D-CNN+LSTM+CE model in the Sequential End-to-End Intent and Slot Label Classification and Localization paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Finstreder (Quartznet) model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Reptile model in the Improving End-to-End Speech-to-Intent Classification with Reptile paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the FANS model in the FANS: Fusing ASR and NLU for on-device SLU paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Pooling classifier pre-trained using force-aligned phoneme and word labels on LibriSpeech model in the Speech Model Pre-training for End-to-End Spoken Language Understanding paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the Amazon Alexa model in the Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the pGSLM+ model in the SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks paper on the Fluent Speech Commands dataset?
Accuracy (%)
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the XNLI Chinese Dev dataset?
Accuracy
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the XNLI Chinese Dev dataset?
Accuracy
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Representation through Knowledge Integration paper on the XNLI Chinese Dev dataset?
Accuracy
What metrics were used to measure the SciFive-large model in the SciFive: a text-to-text transformer model for biomedical literature paper on the MedNLI dataset?
Accuracy, F1
What metrics were used to measure the BioELECTRA-Base model in the BioELECTRA: Pretrained Biomedical text Encoder using Discriminators paper on the MedNLI dataset?
Accuracy, F1
What metrics were used to measure the CharacterBERT (base, medical) model in the CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters paper on the MedNLI dataset?
Accuracy, F1
What metrics were used to measure the BiomedGPT-B model in the BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks paper on the MedNLI dataset?
Accuracy, F1
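The responses above repeatedly name F1 and Accuracy as the evaluation metrics. As a minimal illustrative sketch (not part of the dataset; the label strings and predictions below are invented examples), this is how the two relate for single-label classification, where micro-averaged F1 reduces to plain accuracy:

```python
# Illustrative only: accuracy and micro-F1 for single-label predictions.
# In the single-label multiclass setting, every mismatch counts as one
# false positive and one false negative, so micro-F1 equals accuracy.

def accuracy(gold, pred):
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def micro_f1(gold, pred):
    tp = sum(g == p for g, p in zip(gold, pred))  # exact matches
    fp = fn = len(gold) - tp                      # each error is one FP and one FN
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sense labels, purely for demonstration.
gold = ["noun.food", "noun.animal", "noun.food", "verb.motion"]
pred = ["noun.food", "noun.food", "noun.food", "verb.motion"]

print(accuracy(gold, pred), micro_f1(gold, pred))  # 0.75 0.75
```

Metrics such as F-BC, F-NMI, and V-Measure used in the word sense induction rows are cluster-comparison scores and would need the full clustering, not just per-instance labels.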