| prompts | metrics_response |
|---|---|
What metrics were used to measure the LSTM model in the TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the RoBERTa-wwm-ext-large model in the Pre-Training with Whole Word Masking for Chinese BERT paper on the ChnSentiCorp dataset? | F1 |
What metrics were used to measure the InstructABSA model in the InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the GRACE model in the GRACE: Gradient Harmonized and Cascaded Labeling for Aspect-based Sentiment Analysis paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the SPAN model in the Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the RACL-BERT model in the Relation-Aware Collaborative Learning for Unified Aspect-Based Sentiment Analysis paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the BERT-E2E-ABSA model in the Exploiting BERT for End-to-End Aspect-based Sentiment Analysis paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the DOER model in the DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the IMN model in the An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the E2E-TBSA model in the A Unified Model for Opinion Target Extraction and Target Sentiment Prediction paper on the SemEval 2014 Task 4 Subtask 1+2 dataset? | F1 |
What metrics were used to measure the CNN-LSTM model in the Mazajak: An Online Arabic Sentiment Analyser paper on the ASTD dataset? | Average Recall |
What metrics were used to measure the lstm+bert model in the Deep Neural Networks for Bot Detection paper on the 122 People - Passenger Behavior Recognition Data dataset? | 1:3 Accuracy |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the BERT_large+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the LHTR model in the Heavy-tailed Representations, Text Polarity Classification & Data Augmentation paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the BERT large model in the Unsupervised Data Augmentation for Consistency Training paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the BERT_base+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the BERT large finetune UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the ULMFiT model in the Universal Language Model Fine-tuning for Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the DPCNN model in the Deep Pyramid Convolutional Neural Networks for Text Categorization paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the DRNN model in the Disconnected Recurrent Neural Networks for Text Categorization paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the CNN model in the Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the Block-sparse LSTM model in the GPU Kernels for Block-Sparse Weights paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the CCCapsNet model in the Compositional Coding Capsule Network with K-Means Routing for Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the DNC+CUW model in the Learning to Remember More with Less Memorization paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the M-ACNN model in the Learning Context-Sensitive Convolutional Filters for Text Processing paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the SRNN model in the Sliced Recurrent Neural Networks paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the SWEM-hier model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the fastText, h=10, bigram model in the Bag of Tricks for Efficient Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the LEAM model in the Joint Embedding of Words and Labels for Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the SVDCNN model in the Squeezed Very Deep Convolutional Neural Networks for Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the Char-level CNN model in the Character-level Convolutional Networks for Text Classification paper on the Yelp Binary classification dataset? | Error |
What metrics were used to measure the W2V2-L-LL60K (pipeline approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-L-LL60K (pipeline approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-B-LS960 (pipeline approach, uses LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-B-LS960 (pipeline approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-L-LL60K (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the HuBERT-B-LS960 (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-B-LS960 (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the W2V2-B-VP100K (e2e approach) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | Recall (%), F1 (%), Text model |
What metrics were used to measure the AraBERTv1 model in the AraBERT: Transformer-based Model for Arabic Language Understanding paper on the HARD dataset? | Accuracy |
What metrics were used to measure the CNN-LSTM model in the Mazajak: An Online Arabic Sentiment Analyser paper on the ArSAS dataset? | Average Recall |
What metrics were used to measure the RobBERT v2 model in the RobBERT: a Dutch RoBERTa-based Language Model paper on the DBRD dataset? | Accuracy, F1 |
What metrics were used to measure the RobBERT model in the RobBERT: a Dutch RoBERTa-based Language Model paper on the DBRD dataset? | Accuracy, F1 |
What metrics were used to measure the BERTje model in the BERTje: A Dutch BERT Model paper on the DBRD dataset? | Accuracy, F1 |
What metrics were used to measure the BERT-NL model in the paper on the DBRD dataset? | Accuracy, F1 |
What metrics were used to measure the AraBERTv1 model in the AraBERT: Transformer-based Model for Arabic Language Understanding paper on the AJGT dataset? | Accuracy |
What metrics were used to measure the RuBERT-RuSentiment model in the Deep Transfer Learning Baselines for Sentiment Analysis in Russian paper on the RuSentiment dataset? | Weighted F1 |
What metrics were used to measure the NNC+VK model in the RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian paper on the RuSentiment dataset? | Weighted F1 |
What metrics were used to measure the RuBERT model in the Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language paper on the RuSentiment dataset? | Weighted F1 |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the CR dataset? | Accuracy |
What metrics were used to measure the LM-CPPF RoBERTa-base model in the LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning paper on the CR dataset? | Accuracy |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the CR dataset? | Accuracy |
What metrics were used to measure the Block-sparse LSTM model in the GPU Kernels for Block-Sparse Weights paper on the CR dataset? | Accuracy |
What metrics were used to measure the byte mLSTM7 model in the A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors paper on the CR dataset? | Accuracy |
What metrics were used to measure the USE_T+CNN (w2v w.e.) model in the Universal Sentence Encoder paper on the CR dataset? | Accuracy |
What metrics were used to measure the SuBiLSTM-Tied model in the Improved Sentence Modeling using Suffix Bidirectional LSTM paper on the CR dataset? | Accuracy |
What metrics were used to measure the Capsule-B model in the Investigating Capsule Networks with Dynamic Routing for Text Classification paper on the CR dataset? | Accuracy |
What metrics were used to measure the STM+TSED+PT+2L model in the The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning paper on the CR dataset? | Accuracy |
What metrics were used to measure the BERT large model in the Unsupervised Data Augmentation for Consistency Training paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the DPCNN model in the Deep Pyramid Convolutional Neural Networks for Text Categorization paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the DRNN model in the Disconnected Recurrent Neural Networks for Text Categorization paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the BERT large finetune UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the EXAM model in the Explicit Interaction Model towards Text Classification paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the SRNN model in the Sliced Recurrent Neural Networks paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the CCCapsNet model in the Compositional Coding Capsule Network with K-Means Routing for Text Classification paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the FastText model in the Bag of Tricks for Efficient Text Classification paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the Gumbel+bi-leaf-RNN model in the On Tree-Based Neural Sentence Modeling paper on the Amazon Review Full dataset? | Accuracy |
What metrics were used to measure the LSTMs+CNNs ensemble with multiple conv. ops model in the BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs paper on the SemEval dataset? | F1-score |
What metrics were used to measure the Deep Bi-LSTM+attention model in the DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis paper on the SemEval dataset? | F1-score |
What metrics were used to measure the RoBERTa-wwm-ext-large model in the Pre-Training with Whole Word Masking for Chinese BERT paper on the ChnSentiCorp Dev dataset? | F1 |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the MPQA dataset? | Accuracy |
What metrics were used to measure the STM+TSED+PT+2L model in the The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning paper on the MPQA dataset? | Accuracy |
What metrics were used to measure the byte mLSTM7 model in the A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors paper on the MPQA dataset? | Accuracy |
What metrics were used to measure the USE_T+DAN (w2v w.e.) model in the Universal Sentence Encoder paper on the MPQA dataset? | Accuracy |
What metrics were used to measure the AraBERTv1 model in the AraBERT: Transformer-based Model for Arabic Language Understanding paper on the LABR (2-class, unbalanced) dataset? | Accuracy |
What metrics were used to measure the Random model in the Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm paper on the 1B Words dataset? | 1 in 10 R@1 |
What metrics were used to measure the Heinsen Routing + RoBERTa Large model in the An Algorithm for Routing Vectors in Sequences paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the RoBERTa-large+Self-Explaining model in the Self-Explaining Structures Improve NLP Models paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Heinsen Routing + GPT-2 model in the An Algorithm for Routing Capsules in All Domains paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BCN+Suffix BiLSTM-Tied+CoVe model in the Improved Sentence Modeling using Suffix Bidirectional LSTM paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BERT Large model in the Fine-grained Sentiment Classification using BERT paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the LM-CPPF RoBERTa-base model in the LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BCN+ELMo model in the Deep contextualized word representations paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the byte mLSTM7 model in the A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BCN+Char+CoVe model in the Learned in Translation: Contextualized Word Vectors paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Bi-CAS-LSTM model in the Cell-aware Stacked LSTMs for Modeling Sentences paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the CNN-RNF-LSTM model in the Convolutional Neural Networks with Recurrent Neural Filters paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BERT Base model in the Fine-grained Sentiment Classification using BERT paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Star-Transformer model in the Star-Transformer paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the BP-Transformer + GloVe model in the BP-Transformer: Modelling Long-Range Context via Binary Partitioning paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the MEAN model in the A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Constituency Tree-LSTM model in the Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Bi-LSTM+2+5 model in the Leveraging Multi-grained Sentiment Lexicon Information for Neural Sequence Models paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the MPAD-path model in the Message Passing Attention Networks for Document Understanding paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the Epic model in the Less Grammar, More Features paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the RNN-Capsule model in the Sentiment Analysis by Capsules paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the C-LSTM model in the A C-LSTM Neural Network for Text Classification paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the STM+TSED+PT+2L model in the The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning paper on the SST-5 Fine-grained classification dataset? | Accuracy |
What metrics were used to measure the SWEM-concat model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the SST-5 Fine-grained classification dataset? | Accuracy |
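Every row above follows a fixed `prompt | metrics_response` shape. A minimal Python sketch (the function name and sample handling are illustrative, not part of the dataset) for splitting one such row back into its two columns — splitting on the last pipe, since the prompt text never contains one but the response may contain commas:

```python
def parse_row(line: str) -> tuple[str, str]:
    """Split a 'prompt | metrics_response' row on its last pipe delimiter."""
    prompt, _, metrics = line.rpartition("|")
    return prompt.strip(), metrics.strip()

# Example usage with one row from the table above.
row = ("What metrics were used to measure the XLNet model in the "
       "XLNet: Generalized Autoregressive Pretraining for Language "
       "Understanding paper on the Yelp Binary classification dataset? | Error")
prompt, metrics = parse_row(row)
```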