Columns: prompts (string, lengths 81–413), metrics_response (string, lengths 0–371)
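The two columns above form a simple prompt/response schema with stated string-length bounds. A minimal sketch (plain Python, no external libraries) of how records in this shape could be validated against those bounds; the example row is taken from the listing below, and the `BOUNDS` mapping simply restates the column statistics:

```python
# Validate prompt/response records against the stated column schema:
# prompts is a string of 81-413 chars, metrics_response 0-371 chars.
BOUNDS = {
    "prompts": (81, 413),
    "metrics_response": (0, 371),
}

def validate_row(row: dict) -> bool:
    """Return True if every column is a string within its length bounds."""
    for column, (lo, hi) in BOUNDS.items():
        value = row.get(column)
        if not isinstance(value, str) or not (lo <= len(value) <= hi):
            return False
    return True

# One record from the listing below, in the two-column shape.
rows = [
    {
        "prompts": (
            "What metrics were used to measure the BERT model in the "
            "HateBERT: Retraining BERT for Abusive Language Detection "
            "in English paper on the AbusEval dataset?"
        ),
        "metrics_response": "Macro F1",
    },
]

valid_rows = [row for row in rows if validate_row(row)]
```

A row with a missing column or an out-of-bounds length (e.g. a prompt shorter than 81 characters) is rejected by the same check.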
What metrics were used to measure the Text-Transformers + Five-fold five model cross-validation + Pseudo Label Algorithm model in the Exploring Text-transformers in AAAI 2021 Shared Task: COVID-19 Fake News Detection in English paper on the Grover-Mega dataset?
Unpaired Accuracy
What metrics were used to measure the Grover-Mega model in the Defending Against Neural Fake News paper on the Grover-Mega dataset?
Unpaired Accuracy
What metrics were used to measure the Grover-Large model in the Defending Against Neural Fake News paper on the Grover-Mega dataset?
Unpaired Accuracy
What metrics were used to measure the BERT-Large model in the Defending Against Neural Fake News paper on the Grover-Mega dataset?
Unpaired Accuracy
What metrics were used to measure the GPT2 (355M) model in the Defending Against Neural Fake News paper on the Grover-Mega dataset?
Unpaired Accuracy
What metrics were used to measure the Auxiliary IndicBert model in the Hostility Detection in Hindi leveraging Pre-Trained Language Models paper on the Hostility Detection Dataset in Hindi dataset?
F1 score
What metrics were used to measure the SEMI-FND model in the SEMI-FND: Stacked Ensemble Based Multimodal Inference For Faster Fake News Detection paper on the MediaEval2016 dataset?
Accuracy
What metrics were used to measure the TextRNN model in the Exploring Text-transformers in AAAI 2021 Shared Task: COVID-19 Fake News Detection in English paper on the Social media dataset?
Accuracy
What metrics were used to measure the MUStARD++ model in the A Multimodal Corpus for Emotion Recognition in Sarcasm paper on the MUStARD++ dataset?
Precision, Recall, F1
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (SNARKS) dataset?
Accuracy
What metrics were used to measure the RoBERTa + Mutation Data Augmentation model in the UTNLP at SemEval-2022 Task 6: A Comparative Analysis of Sarcasm Detection Using Generative-based and Mutation-based Data Augmentation paper on the iSarcasm dataset?
F1-Score
What metrics were used to measure the Bag-of-Bigrams model in the A Large Self-Annotated Corpus for Sarcasm paper on the SARC (pol-bal) dataset?
Accuracy
What metrics were used to measure the CASCADE model in the CASCADE: Contextual Sarcasm Detection in Online Discussion Forums paper on the SARC (pol-bal) dataset?
Accuracy
What metrics were used to measure the CASCADE model in the CASCADE: Contextual Sarcasm Detection in Online Discussion Forums paper on the SARC (all-bal) dataset?
Accuracy
What metrics were used to measure the Bag-of-Bigrams model in the A Large Self-Annotated Corpus for Sarcasm paper on the SARC (all-bal) dataset?
Accuracy
What metrics were used to measure the RoBERTa_large (Context-Response) model in the Sarcasm Detection using Context Separators in Online Discourse paper on the FigLang 2020 Twitter Dataset dataset?
F1
What metrics were used to measure the BERT model in the Applying Transformers and Aspect-based Sentiment Analysis approaches on Sarcasm Detection paper on the FigLang 2020 Twitter Dataset dataset?
F1
What metrics were used to measure the BART model in the When did you become so smart, oh wise one?! Sarcasm Explanation in Multi-modal Multi-party Dialogues paper on the WITS dataset?
R1
What metrics were used to measure the Bag-of-Words model in the A Large Self-Annotated Corpus for Sarcasm paper on the SARC (pol-unbal) dataset?
Avg F1
What metrics were used to measure the BERT+Aspect-based approaches model in the Applying Transformers and Aspect-based Sentiment Analysis approaches on Sarcasm Detection paper on the FigLang 2020 Reddit Dataset dataset?
F1
What metrics were used to measure the RoBERTa_large - (Separated Context-Response) model in the Sarcasm Detection using Context Separators in Online Discourse paper on the FigLang 2020 Reddit Dataset dataset?
F1
What metrics were used to measure the Multilingual BERT model in the Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis paper on the ToLD-Br dataset?
F1-score
What metrics were used to measure the AutoML model in the Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis paper on the ToLD-Br dataset?
F1-score
What metrics were used to measure the BiLSTM + static BE model in the Hate speech detection using static BERT embeddings paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the BERT model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the BiLSTM+Attention+FT model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the OPT-175B (few-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the CNN+Attention+FT+GV model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the OPT-175B (one-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the OPT-175B (zero-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the SVM model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the Random Forests model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the Davinci (zero-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the Davinci (one-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the Davinci (few-shot) model in the OPT: Open Pre-trained Transformer Language Models paper on the Ethos Binary dataset?
F1-score, Classification Accuracy, Precision
What metrics were used to measure the Mozafari et al., 2019 model in the AAA: Fair Evaluation for Abuse Detection Systems Wanted paper on the Waseem et al., 2018 dataset?
AAA, F1 (micro)
What metrics were used to measure the SVM model in the AAA: Fair Evaluation for Abuse Detection Systems Wanted paper on the Waseem et al., 2018 dataset?
AAA, F1 (micro)
What metrics were used to measure the Kennedy et al., 2020 model in the AAA: Fair Evaluation for Abuse Detection Systems Wanted paper on the Waseem et al., 2018 dataset?
AAA, F1 (micro)
What metrics were used to measure the mBERT model in the Deep Learning Models for Multilingual Hate Speech Detection paper on the Automatic Misogynistic Identification dataset?
Accuracy
What metrics were used to measure the Logistic Regression model in the Hateminers : Detecting Hate speech against Women paper on the Automatic Misogynistic Identification dataset?
Accuracy
What metrics were used to measure the BERT-MRP model in the Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BERT-RP model in the Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BERT-HateXplain [Attn] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BERT-HateXplain [LIME] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BERT [Attn] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BiRNN-HateXplain [Attn] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BiRNN-Attn [Attn] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the CNN-GRU [LIME] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the BiRNN [LIME] model in the HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection paper on the HateXplain dataset?
AUROC, Macro F1, Accuracy
What metrics were used to measure the Baseline BERT (task A) model in the Detecting Abusive Albanian paper on the SHAJ dataset?
F1
What metrics were used to measure the AOM mBERT model in the Annotating Online Misogyny paper on the bajer_danish_misogyny dataset?
F1
What metrics were used to measure the HateBERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the AbusEval dataset?
Macro F1
What metrics were used to measure the BERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the AbusEval dataset?
Macro F1
What metrics were used to measure the HateBERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the HatEval dataset?
Macro F1
What metrics were used to measure the BERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the HatEval dataset?
Macro F1
What metrics were used to measure the MLARAM model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos MultiLabel dataset?
Hamming Loss
What metrics were used to measure the MLkNN model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos MultiLabel dataset?
Hamming Loss
What metrics were used to measure the Binary Relevance model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos MultiLabel dataset?
Hamming Loss
What metrics were used to measure the Neural Classifier Chains model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos MultiLabel dataset?
Hamming Loss
What metrics were used to measure the Neural Binary Relevance model in the ETHOS: an Online Hate Speech Detection Dataset paper on the Ethos MultiLabel dataset?
Hamming Loss
What metrics were used to measure the Baseline model in the Offensive Language and Hate Speech Detection for Danish paper on the DKhate dataset?
F1
What metrics were used to measure the HateBERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the OffensEval 2019 dataset?
Macro F1
What metrics were used to measure the BERT model in the HateBERT: Retraining BERT for Abusive Language Detection in English paper on the OffensEval 2019 dataset?
Macro F1
What metrics were used to measure the RoBERTa-large-ST model in the Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks paper on the OLID dataset?
Macro F1
What metrics were used to measure the MTMT-Net model in the A Multi-Task Mean Teacher for Semi-Supervised Shadow Detection paper on the SBU dataset?
BER
What metrics were used to measure the DSDNet model in the Distraction-Aware Shadow Detection paper on the SBU dataset?
BER
What metrics were used to measure the BDRAR model in the Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection paper on the SBU dataset?
BER
What metrics were used to measure the A+DNet model in the A+D Net: Training a Shadow Detector with Adversarial Shadow Attenuation paper on the SBU dataset?
BER
What metrics were used to measure the DSC model in the Direction-aware Spatial Context Features for Shadow Detection paper on the SBU dataset?
BER
What metrics were used to measure the ST-CGAN model in the Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal paper on the SBU dataset?
BER
What metrics were used to measure the scGAN model in the paper on the SBU dataset?
BER
What metrics were used to measure the stacked-CNN model in the paper on the SBU dataset?
BER
What metrics were used to measure the patched-CNN model in the Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network paper on the SBU dataset?
BER
What metrics were used to measure the BDN model in the Omnidirectional Scene Text Detection with Sequential-free Box Discretization paper on the IC19-ReCTs dataset?
F-Measure
What metrics were used to measure the PMTD* model in the Pyramid Mask Text Detector paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the Corner Localization (single-scale) model in the Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the SBD model in the Exploring the Capacity of an Orderless Box Discretization Network for Multi-orientation Scene Text Detection paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the FOTS MS model in the FOTS: Fast Oriented Text Spotting with a Unified Network paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the CharNet H-88 model in the Convolutional Character Networks paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the FOTS model in the FOTS: Fast Oriented Text Spotting with a Unified Network paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the SPCNET model in the Scene Text Detection with Supervised Pyramid Context Network paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the CRAFT model in the Character Region Awareness for Text Detection paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the PAN model in the Mask R-CNN with Pyramid Attention Network for Scene Text Detection paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the GNNets model in the Geometry Normalization Networks for Accurate Scene Text Detection paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the CharNet R-50 model in the Convolutional Character Networks paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the PSENet-1s model in the Shape Robust Text Detection with Progressive Scale Expansion Network paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the PSENet (ResNet-152) model in the Shape Robust Text Detection with Progressive Scale Expansion Network paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the Corner Localization (multi-scale) model in the Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation paper on the ICDAR 2017 MLT dataset?
Precision, Recall, F-Measure, H-Mean
What metrics were used to measure the TextFuseNet (ResNeXt-101) model in the TextFuseNet: Scene Text Detection with Richer Fused Features paper on the ICDAR 2015 dataset?
F-Measure, Precision, Recall, Accuracy, FPS
What metrics were used to measure the CharNet H-88 (multi-scale) model in the Convolutional Character Networks paper on the ICDAR 2015 dataset?
F-Measure, Precision, Recall, Accuracy, FPS
What metrics were used to measure the CharNet H-88 (single-scale) model in the Convolutional Character Networks paper on the ICDAR 2015 dataset?
F-Measure, Precision, Recall, Accuracy, FPS
What metrics were used to measure the CharNet H-50 (multi-scale) model in the Convolutional Character Networks paper on the ICDAR 2015 dataset?
F-Measure, Precision, Recall, Accuracy, FPS