| prompts | metrics_response |
|---|---|
What metrics were used to measure the RCNN model in the Sentiment analysis for Urdu online reviews using deep learning models paper on the Urdu Online Reviews dataset? | Average F1 |
What metrics were used to measure the fastText, h=10, bigram model in the Bag of Tricks for Efficient Text Classification paper on the Sogou News dataset? | Accuracy |
What metrics were used to measure the xlmindic-base-uniscript model in the Does Transliteration Help Multilingual Language Modeling? paper on the IITP Product Reviews Sentiment dataset? | Accuracy |
What metrics were used to measure the xlmindic-base-multiscript model in the Does Transliteration Help Multilingual Language Modeling? paper on the IITP Product Reviews Sentiment dataset? | Accuracy |
What metrics were used to measure the IndicBERT Base model in the IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages paper on the IITP Product Reviews Sentiment dataset? | Accuracy |
What metrics were used to measure the BERT large model in the Unsupervised Data Augmentation for Consistency Training paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the DPCNN model in the Deep Pyramid Convolutional Neural Networks for Text Categorization paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the BERT large finetune UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the DRNN model in the Disconnected Recurrent Neural Networks for Text Categorization paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the EXAM model in the Explicit Interaction Model towards Text Classification paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the SRNN model in the Sliced Recurrent Neural Networks paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the CCCapsNet model in the Compositional Coding Capsule Network with K-Means Routing for Text Classification paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the FastText model in the Bag of Tricks for Efficient Text Classification paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the Gumbel+bi-leaf-RNN model in the On Tree-Based Neural Sentence Modeling paper on the Amazon Review Polarity dataset? | Accuracy |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Heinsen Routing + RoBERTa Large model in the An Algorithm for Routing Vectors in Sequences paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the GraphStar model in the Graph Star Net for Generalized Multi-Task Learning paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the DV-ngrams-cosine with NB sub-sampling + RoBERTa.base model in The Document Vectors Using Cosine Similarity Revisited paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the DV-ngrams-cosine + RoBERTa.base model in The Document Vectors Using Cosine Similarity Revisited paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BERT large finetune UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BERT_large+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the RoBERTa.base model in The Document Vectors Using Cosine Similarity Revisited paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the L MIXED model in the Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BERT_base+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BERT large model in the Unsupervised Data Augmentation for Consistency Training paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the ULMFiT model in the Universal Language Model Fine-tuning for Text Classification paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Block-sparse LSTM model in the GPU Kernels for Block-Sparse Weights paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the CEN-tpc model in the Contextual Explanation Networks paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the oh-LSTM model in the Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Virtual adversarial training model in the Adversarial Training Methods for Semi-Supervised Text Classification paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the DV-ngrams-cosine + NB-weighted BON (re-evaluated) model in The Document Vectors Using Cosine Similarity Revisited paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Nyströmformer model in the Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Modified LMU (34M) model in the Parallelizing Legendre Memory Unit Training paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the DV-ngrams-cosine model in the Sentiment Classification Using Document Embeddings Trained with Cosine Similarity paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the OCaTS (kNN & GPT-3.5-turbo) model in the Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the seq2-bown-CNN model in the Effective Use of Word Order for Text Categorization with Convolutional Neural Networks paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BP-Transformer + GloVe model in the BP-Transformer: Modelling Long-Range Context via Binary Partitioning paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the BCN+Char+CoVe model in the Learned in Translation: Contextualized Word Vectors paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the ToWE-SG model in the Task-oriented Word Embedding for Text Classification paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the LSTM with dynamic skip model in the Long Short-Term Memory with Dynamic Skip Connections paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the CNN+LSTM model in the On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the UnICORNN model in the UnICORNN: A recurrent model for learning very long time dependencies paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the CfC model in the Closed-form Continuous-time Neural Models paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Doc2VecC model in the Efficient Vector Representation for Documents through Corruption paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the coRNN model in the Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the S-LSTM model in the Sentence-State LSTM for Text Representation paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the AlexNet [alexnet] model in the Classifying Textual Data with Pre-trained Vision Models through Transfer Learning and Data Transformations paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the model in the Classifying Textual Data with Pre-trained Vision Models through Transfer Learning and Data Transformations paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the VGG16 [vgg16] model in the Classifying Textual Data with Pre-trained Vision Models through Transfer Learning and Data Transformations paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the ResNext [resnext] model in the Classifying Textual Data with Pre-trained Vision Models through Transfer Learning and Data Transformations paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Standard DR-AGG model in the Information Aggregation via Dynamic Routing for Sequence Encoding paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the Reverse DR-AGG model in the Information Aggregation via Dynamic Routing for Sequence Encoding paper on the IMDb dataset? | Accuracy |
What metrics were used to measure the VLAWE model in the Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the byte mLSTM7 model in the A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the MEAN model in the A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the RNN-Capsule model in the Sentiment Analysis by Capsules paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the Capsule-B model in the Investigating Capsule Networks with Dynamic Routing for Text Classification paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the SuBiLSTM-Tied model in the Improved Sentence Modeling using Suffix Bidirectional LSTM paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the USE_T+CNN model in the Universal Sentence Encoder paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the STM+TSED+PT+2L model in The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the GRU-RNN-WORD2VEC model in the All-but-the-Top: Simple and Effective Postprocessing for Word Representations paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the SWEM-concat model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the TM-Glove model in the Enhancing Interpretable Clauses Semantically using Pretrained Word Representation paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the Text GCN model in the Graph Convolutional Networks for Text Classification paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the GraphStar model in the Graph Star Net for Generalized Multi-Task Learning paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the S-LSTM model in the Sentence-State LSTM for Text Representation paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the SGC model in the Simplifying Graph Convolutional Networks paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the SGCN model in the Simplifying Graph Convolutional Networks paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the Millions of Emoji model in the Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm paper on the MR dataset? | Accuracy, Training Time |
What metrics were used to measure the k-RoBERTa (parallel) model in the Incorporating Multiple Knowledge Sources for Targeted Aspect-based Financial Sentiment Analysis paper on the FiQA dataset? | MSE, R^2 |
What metrics were used to measure the FinBERT model in the FinBERT: Financial Sentiment Analysis with Pre-trained Language Models paper on the FiQA dataset? | MSE, R^2 |
What metrics were used to measure the Deep Representations model in the Financial Aspect-Based Sentiment Analysis using Deep Representations paper on the FiQA dataset? | MSE, R^2 |
What metrics were used to measure the Deep Neural Networks (DNN) model in the Financial Aspect and Sentiment Predictions with Deep Neural Networks: An Ensemble Approach paper on the FiQA dataset? | MSE, R^2 |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BERT_large+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BERT large model in the Unsupervised Data Augmentation for Consistency Training paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BERT_base+ITPT model in the How to Fine-Tune BERT for Text Classification? paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the ULMFiT model in the Universal Language Model Fine-tuning for Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the DPCNN model in the Deep Pyramid Convolutional Neural Networks for Text Categorization paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the DRNN model in the Disconnected Recurrent Neural Networks for Text Categorization paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BERT large finetune UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the CNN model in the Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BiLSTM generalized pooling model in the Enhancing Sentence Embedding with Generalized Pooling paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the CCCapsNet model in the Compositional Coding Capsule Network with K-Means Routing for Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the DNC+CUW model in the Learning to Remember More with Less Memorization paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the LEAM model in the Joint Embedding of Words and Labels for Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the FastText model in the Bag of Tricks for Efficient Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the SWEM-hier model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the Char-level CNN model in the Character-level Convolutional Networks for Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the SVDCNN model in the Squeezed Very Deep Convolutional Neural Networks for Text Classification paper on the Yelp Fine-grained classification dataset? | Error |
What metrics were used to measure the BERTweet model in the BERTweet: A pre-trained language model for English Tweets paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the RoB-RT model in the XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the RoBERTa-Base model in the TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the RoBERTa-Twitter model in the TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the SVM model in the TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |
What metrics were used to measure the FastText model in the TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification paper on the TweetEval dataset? | Emoji, Emotion, Hate, Irony, Offensive, Sentiment, Stance, ALL, Accuracy, Macro F1, Weighted F1, f1 |