| prompts | metrics_response |
|---|---|
What metrics were used to measure the Pyramid Feature Attention model in the Pyramid Feature Attention Network for Saliency detection paper on the PASCAL-S dataset? | MAE |
What metrics were used to measure the GTAN model in the Semi-supervised Credit Card Fraud Detection via Attribute-driven Graph Representation paper on the Amazon-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the RLC-GNN model in the RLC-GNN: An Improved Deep Architecture for Spatial-Based Graph Neural Network with Application to Fraud Detection paper on the Amazon-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the RioGNN model in the Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks paper on the Amazon-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the PC-GNN model in the Pick and Choose: A GNN-based Imbalanced Learning Approach for Fraud Detection paper on the Amazon-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the CARE-GNN model in the Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters paper on the Amazon-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the GTAN model in the Semi-supervised Credit Card Fraud Detection via Attribute-driven Graph Representation paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the BOLT-GRAPH model in the BOLT: An Automated Deep Learning Framework for Training and Deploying Large-Scale Search and Recommendation Models on Commodity CPU Hardware paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the SplitGNN model in the SplitGNN: Spectral Graph Neural Network for Fraud Detection against Heterophily paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the GAT+JK model in the New Benchmarks for Learning on Non-Homophilous Graphs paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the RLC-GNN model in the RLC-GNN: An Improved Deep Architecture for Spatial-Based Graph Neural Network with Application to Fraud Detection paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the RioGNN model in the Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the PC-GNN model in the Pick and Choose: A GNN-based Imbalanced Learning Approach for Fraud Detection paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the CARE-GNN model in the Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters paper on the Yelp-Fraud dataset? | AUC-ROC, Averaged Precision |
What metrics were used to measure the DevNet model in the Deep Anomaly Detection with Deviation Networks paper on the Kaggle-Credit Card Fraud Dataset dataset? | AUC, Accuracy, Average Precision |
What metrics were used to measure the XBNET model in the XBNet : An Extremely Boosted Neural Network paper on the Kaggle-Credit Card Fraud Dataset dataset? | AUC, Accuracy, Average Precision |
What metrics were used to measure the SplitGNN model in the SplitGNN: Spectral Graph Neural Network for Fraud Detection against Heterophily paper on the FDCompCN dataset? | AUC-ROC |
What metrics were used to measure the MSTREAM-PCA model in the MSTREAM: Fast Anomaly Detection in Multi-Aspect Streams paper on the CIC-DDoS dataset? | AUC |
What metrics were used to measure the intrusion detection model in the A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and Support Vector Machine (SVM) for Intrusion Detection in Network Traffic Data paper on the 20NewsGroups dataset? | Actions Top-1 (S2) |
What metrics were used to measure the MSTREAM-IB model in the MSTREAM: Fast Anomaly Detection in Multi-Aspect Streams paper on the CIC-DoS dataset? | AUC |
What metrics were used to measure the MSTREAM-AE model in the MSTREAM: Fast Anomaly Detection in Multi-Aspect Streams paper on the UNSW-NB15 dataset? | AUC |
What metrics were used to measure the ASVDD model in the Automatic support vector data description paper on the Breast cancer Wisconsin_class 4 dataset? | Average Accuracy |
What metrics were used to measure the MIX model in the MIX: A Joint Learning Framework for Detecting Both Clustered and Scattered Outliers in Mixed-Type Data paper on the Internet Ad dataset? | AUC-ROC |
What metrics were used to measure the VRAE+SVM model in the Learning Representations from Healthcare Time Series Data for Unsupervised Anomaly Detection paper on the ECG5000 dataset? | Accuracy |
What metrics were used to measure the F-t ALSTM-FCN model in the LSTM Fully Convolutional Networks for Time Series Classification paper on the ECG5000 dataset? | Accuracy |
What metrics were used to measure the GENDIS model in the GENDIS: GENetic DIscovery of Shapelets paper on the ECG5000 dataset? | Accuracy |
What metrics were used to measure the MIX model in the MIX: A Joint Learning Framework for Detecting Both Clustered and Scattered Outliers in Mixed-Type Data paper on the Hepatitis dataset? | AUC-ROC |
What metrics were used to measure the ASVDD model in the Automatic support vector data description paper on the Breast cancer Wisconsin_class 2 dataset? | Average Accuracy |
What metrics were used to measure the ASVDD model in the Automatic support vector data description paper on the Balance scale_class 1 dataset? | Average Accuracy |
What metrics were used to measure the MIX model in the MIX: A Joint Learning Framework for Detecting Both Clustered and Scattered Outliers in Mixed-Type Data paper on the Heart-C dataset? | AUC |
What metrics were used to measure the ASVDD model in the Automatic support vector data description paper on the Ionosphere_class b dataset? | Average Accuracy |
What metrics were used to measure the PAE model in the Probabilistic Autoencoder paper on the Fashion-MNIST dataset? | AUROC |
What metrics were used to measure the ASVDD model in the Automatic support vector data description paper on the Glass identification dataset? | Average Accuracy |
What metrics were used to measure the LSTMCaps model in the Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data paper on the SKAB dataset? | Average F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the poldeb dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the mtsd dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the Perspectrum dataset? | F1 |
What metrics were used to measure the BERT model in the StEduCov: An Explored and Benchmarked Dataset on Stance Detection in Tweets towards Online Education during COVID-19 Pandemic paper on the STEDUCOV: A DATASET ON STANCE DETECTION IN TWEETS TOWARDS ONLINE EDUCATION DURING COVID-19 PANDEMIC dataset? | Average F1, Accuracy (10-fold) |
What metrics were used to measure the MUSE + UMAP (Unsupervised) model in the Embeddings-Based Clustering for Target Specific Stances: The Case of a Polarized Turkey paper on the Trump Midterm Elections 2018 dataset? | Avg F1, Macro Precision, Macro Recall |
What metrics were used to measure the Kochkina et al. 2017 model in the Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM paper on the RumourEval dataset? | Accuracy, F1 |
What metrics were used to measure the Bahuleyan and Vechtomova 2017 model in the UWaterloo at SemEval-2017 Task 8: Detecting Stance towards Rumours with Topic Independent Features paper on the RumourEval dataset? | Accuracy, F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the RumourEval dataset? | Accuracy, F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the wtwt dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the Snopes dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the SemEval 2016 dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the FNC-1 dataset? | F1 |
What metrics were used to measure the RGT model in the MGTAB: A Multi-Relational Graph-Based Twitter Account Detection Benchmark paper on the MGTAB dataset? | Acc, F1 |
What metrics were used to measure the Simple-HGN model in the MGTAB: A Multi-Relational Graph-Based Twitter Account Detection Benchmark paper on the MGTAB dataset? | Acc, F1 |
What metrics were used to measure the GCN model in the MGTAB: A Multi-Relational Graph-Based Twitter Account Detection Benchmark paper on the MGTAB dataset? | Acc, F1 |
What metrics were used to measure the GAT model in the MGTAB: A Multi-Relational Graph-Based Twitter Account Detection Benchmark paper on the MGTAB dataset? | Acc, F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the ibmcs dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the ARC dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the SemEval 2019 dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the emergent dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the argmin dataset? | F1 |
What metrics were used to measure the Boosting model in the Stance Prediction for Russian: Data and Analysis paper on the RuStance dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the VAST dataset? | F1 |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the SCD dataset? | F1 |
What metrics were used to measure the MUSE + UMAP (Unsupervised) model in the Embeddings-Based Clustering for Target Specific Stances: The Case of a Polarized Turkey paper on the Turkish Elections 2018 dataset? | Avg F1, Macro Precision, Macro Recall |
What metrics were used to measure the TESTED model in the Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection paper on the iac1 dataset? | F1 |
What metrics were used to measure the CASENet model in the CASENet: Deep Category-Aware Semantic Edge Detection paper on the Cityscapes test dataset? | AP, Maximum F-measure |
What metrics were used to measure the RCN model in the Object Contour and Edge Detection with RefineContourNet paper on the BSDS500 dataset? | F1 |
What metrics were used to measure the DexiNed model in the Dense Extreme Inception Network for Edge Detection paper on the BIPED dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the BDCN model in the Bi-Directional Cascade Network for Perceptual Edge Detection paper on the BIPED dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the LDC model in the LDC: Lightweight Dense CNN for Edge Detection paper on the BIPED dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the CATS model in the Unmixing Convolutional Features for Crisp Edge Detection paper on the BIPED dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the RCF model in the Richer Convolutional Features for Edge Detection paper on the BIPED dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the TEED model in the Tiny and Efficient Model for the Edge Detection Generalization paper on the UDED dataset? | ODS |
What metrics were used to measure the LDC model in the LDC: Lightweight Dense CNN for Edge Detection paper on the UDED dataset? | ODS |
What metrics were used to measure the DexiNed model in the Dense Extreme Inception Network for Edge Detection paper on the UDED dataset? | ODS |
What metrics were used to measure the PiDiNet model in the Pixel Difference Networks for Efficient Edge Detection paper on the UDED dataset? | ODS |
What metrics were used to measure the DexiNed (WACV'2020) model in the Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection paper on the CID dataset? | ODS |
What metrics were used to measure the SED model in the paper on the CID dataset? | ODS |
What metrics were used to measure the LDC model in the LDC: Lightweight Dense CNN for Edge Detection paper on the BRIND dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the PiDiNet model in the Pixel Difference Networks for Efficient Edge Detection paper on the BRIND dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the BDCN model in the Bi-Directional Cascade Network for Perceptual Edge Detection paper on the BRIND dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the CASENet model in the CASENet: Deep Category-Aware Semantic Edge Detection paper on the SBD dataset? | Maximum F-measure |
What metrics were used to measure the WSOB model in the Weakly Supervised Object Boundaries paper on the SBD dataset? | Maximum F-measure |
What metrics were used to measure the DexiNed-a model in the Dense Extreme Inception Network for Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the DexiNed-f model in the Dense Extreme Inception Network for Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the CATS model in the Unmixing Convolutional Features for Crisp Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the BDCN model in the Bi-Directional Cascade Network for Perceptual Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the LDC model in the LDC: Lightweight Dense CNN for Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the RCF model in the Richer Convolutional Features for Edge Detection paper on the MDBD dataset? | ODS, Number of parameters (M) |
What metrics were used to measure the CASTANET+ Ensemble model in the Generic Event Boundary Detection Challenge at CVPR 2021 Technical Report: Cascaded Temporal Attention Network (CASTANET) paper on the Kinetics-400 dataset? | Pairwise F1, Precision, Recall |
What metrics were used to measure the InvPT model in the InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding paper on the NYU-Depth V2 dataset? | odsF |
What metrics were used to measure the InvPT model in the InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding paper on the PASCAL Context dataset? | odsF |
What metrics were used to measure the Xu et al. model in the An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks paper on the CIFAR-10 dataset? | Attack: PGD20, Attack: AutoAttack, Attack: DeepFool |
What metrics were used to measure the AdvTraining [madry2018] model in the Towards Deep Learning Models Resistant to Adversarial Attacks paper on the CIFAR-10 dataset? | Attack: PGD20, Attack: AutoAttack, Attack: DeepFool |
What metrics were used to measure the TRADES [zhang2019b] model in the Theoretically Principled Trade-off between Robustness and Accuracy paper on the CIFAR-10 dataset? | Attack: PGD20, Attack: AutoAttack, Attack: DeepFool |
What metrics were used to measure the ConvTasnet and Dual Path Transformers model in the Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems paper on the WSJ0-2mix dataset? | SDR |
What metrics were used to measure the FGSM model in the An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks paper on the miniImageNet dataset? | Accuracy |
What metrics were used to measure the Feature Denoising model in the Feature Denoising for Improving Adversarial Robustness paper on the CAAD 2018 dataset? | Accuracy |
What metrics were used to measure the WRN-28-10 model in the Language Guided Adversarial Purification paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the Stochastic-LWTA/PGD/WideResNet-34-10 model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the Ours (Stochastic-LWTA/PGD/WideResNet-34-5) model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the Ours (Stochastic-LWTA/PGD/WideResNet-34-1) model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the PCL (against PGD, white box) model in the Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the Stochastic-LWTA/PGD/WideResNet-34-5 model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Attack: AutoAttack, Robust Accuracy |
What metrics were used to measure the Auto Encoder-Block Switching defense with GradCAM model in the An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks paper on the miniImageNet dataset? | Accuracy |