prompts (string, length 81–413)
metrics_response (string, length 0–371)
What metrics were used to measure the RNN-DFS model in the Relational Pooling for Graph Representations paper on the Tox21 dataset?
AUC
What metrics were used to measure the HierG2G model in the Hierarchical Graph-to-Graph Translation for Molecules paper on the QED dataset?
Diversity, Success
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the BBBP(scaffold) dataset?
AUC
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the Lipophilicity(scaffold) dataset?
RMSE
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the BACE(scaffold) dataset?
AUC
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the LIT-PCBA(ALDH1) dataset?
AUC
What metrics were used to measure the TransformerCPI model in the TransformerCPI: improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments paper on the LIT-PCBA(ALDH1) dataset?
AUC
What metrics were used to measure the DGraphDTA model in the Drug–target affinity prediction using graph neural network and contact maps paper on the LIT-PCBA(ALDH1) dataset?
AUC
What metrics were used to measure the elEmBERT-V1 model in the Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties paper on the SIDER dataset?
AUC
What metrics were used to measure the Ensemble locally constant networks model in the Oblique Decision Trees from Derivatives of ReLU Networks paper on the SIDER dataset?
AUC
What metrics were used to measure the ContextPred model in the Strategies for Pre-training Graph Neural Networks paper on the SIDER dataset?
AUC
What metrics were used to measure the TrimNet model in the TrimNet: learning molecular representation from triplet messages for biomedicine paper on the BACE dataset?
AUC
What metrics were used to measure the Ensemble locally constant network model in the Oblique Decision Trees from Derivatives of ReLU Networks paper on the BACE dataset?
AUC
What metrics were used to measure the ProtoW-L2 model in the Optimal Transport Graph Neural Networks paper on the BACE dataset?
AUC
What metrics were used to measure the elEmBERT-V1 model in the Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties paper on the BACE dataset?
AUC
What metrics were used to measure the ContextPred model in the Strategies for Pre-training Graph Neural Networks paper on the BACE dataset?
AUC
What metrics were used to measure the ProtoW-L2 model in the Optimal Transport Graph Neural Networks paper on the BBBP dataset?
AUC
What metrics were used to measure the elEmBERT-V1 model in the Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties paper on the BBBP dataset?
AUC
What metrics were used to measure the ContextPred model in the Strategies for Pre-training Graph Neural Networks paper on the BBBP dataset?
AUC
What metrics were used to measure the TrimNet model in the TrimNet: learning molecular representation from triplet messages for biomedicine paper on the ToxCast dataset?
AUC
What metrics were used to measure the GraphConv + dummy super node model in the Learning Graph-Level Representation for Drug Discovery paper on the ToxCast dataset?
AUC
What metrics were used to measure the GraphConv model in the Convolutional Networks on Graphs for Learning Molecular Fingerprints paper on the ToxCast dataset?
AUC
What metrics were used to measure the ContextPred model in the Strategies for Pre-training Graph Neural Networks paper on the ToxCast dataset?
AUC
What metrics were used to measure the PAMNet model in the A universal framework for accurate and efficient geometric deep learning of molecular systems paper on the QM9 dataset?
Error ratio
What metrics were used to measure the MXMNet model in the Molecular Mechanics-Driven Graph Neural Network with Multiplex Graph for Molecular Structures paper on the QM9 dataset?
Error ratio
What metrics were used to measure the SphereNet model in the Spherical Message Passing for 3D Graph Networks paper on the QM9 dataset?
Error ratio
What metrics were used to measure the ComENet model in the ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs paper on the QM9 dataset?
Error ratio
What metrics were used to measure the DimeNet++ model in the Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules paper on the QM9 dataset?
Error ratio
What metrics were used to measure the PaiNN model in the Equivariant message passing for the prediction of tensorial properties and molecular spectra paper on the QM9 dataset?
Error ratio
What metrics were used to measure the DimeNet model in the Directional Message Passing for Molecular Graphs paper on the QM9 dataset?
Error ratio
What metrics were used to measure the DeepMoleNet model in the Transferable Multi-level Attention Neural Network for Accurate Prediction of Quantum Chemistry Properties via Multi-task Learning paper on the QM9 dataset?
Error ratio
What metrics were used to measure the MPNNs model in the Neural Message Passing for Quantum Chemistry paper on the QM9 dataset?
Error ratio
What metrics were used to measure the Gated Graph Sequence NN model in the Gated Graph Sequence Neural Networks paper on the QM9 dataset?
Error ratio
What metrics were used to measure the Molecular Graph Convolutions model in the Molecular Graph Convolutions: Moving Beyond Fingerprints paper on the QM9 dataset?
Error ratio
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the LIT-PCBA(ESR1_ant) dataset?
AUC
What metrics were used to measure the TransformerCPI model in the TransformerCPI: improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments paper on the LIT-PCBA(ESR1_ant) dataset?
AUC
What metrics were used to measure the DGraphDTA model in the Drug–target affinity prediction using graph neural network and contact maps paper on the LIT-PCBA(ESR1_ant) dataset?
AUC
What metrics were used to measure the Multi-input Neural network with Attention model in the Attention-based Multi-Input Deep Learning Architecture for Biological Activity Prediction: An Application in EGFR Inhibitors paper on the egfr-inh dataset?
AUC
What metrics were used to measure the SMT-DTA model in the SSM-DTA: Breaking the Barriers of Data Scarcity in Drug-Target Affinity Prediction paper on the DAVIS-DTA dataset?
CI, MSE
What metrics were used to measure the DeepPurpose model in the DeepPurpose: a Deep Learning Library for Drug-Target Interaction Prediction paper on the DAVIS-DTA dataset?
CI, MSE
What metrics were used to measure the DeepDTA model in the DeepDTA: Deep Drug-Target Binding Affinity Prediction paper on the DAVIS-DTA dataset?
CI, MSE
What metrics were used to measure the GraphDTA model in the GraphDTA: prediction of drug–target binding affinity using graph convolutional networks paper on the DAVIS-DTA dataset?
CI, MSE
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the ESOL(scaffold) dataset?
RMSE
What metrics were used to measure the DeepDTA model in the DeepDTA: Deep Drug-Target Binding Affinity Prediction paper on the BindingDB IC50 dataset?
Pearson Correlation, RMSE
What metrics were used to measure the DeepAffinity model in the DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks paper on the BindingDB IC50 dataset?
Pearson Correlation, RMSE
What metrics were used to measure the GLAM model in the An adaptive graph learning method for automated molecular interactions and properties predictions paper on the ToxCast(scaffold) dataset?
AUC
What metrics were used to measure the TF-Tensor-CNN model in the EEG Signal Dimensionality Reduction and Classification using Tensor Decomposition and Deep Convolutional Neural Networks paper on the CHB-MIT dataset?
Accuracy
What metrics were used to measure the ResNet+ LSTM model in the Real-Time Seizure Detection using EEG: A Comprehensive Comparison of Recent Approaches under a Realistic Setting paper on the TUH EEG Seizure Corpus dataset?
AUROC
What metrics were used to measure the CNN2D+LSTM model in the Real-Time Seizure Detection using EEG: A Comprehensive Comparison of Recent Approaches under a Realistic Setting paper on the TUH EEG Seizure Corpus dataset?
AUROC
What metrics were used to measure the CFS-BPNN model in the Curvature-based Feature Selection with Application in Classifying Electronic Health Records paper on the Diabetic Retinopathy Debrecen Data Set dataset?
Mean Accuracy
What metrics were used to measure the CORe model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset?
AUROC
What metrics were used to measure the BioBERT Base model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset?
AUROC
What metrics were used to measure the DenseNet-161 model in the BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis paper on the BreastDICOM4 dataset?
Average Precision, Average Recall
What metrics were used to measure the Hareesh model in the A probabilistic constrained clustering for transfer learning and image category discovery paper on the ngm dataset?
520
What metrics were used to measure the Gray-scale IMG CNN model in the Using Convolutional Neural Networks for Classification of Malware represented as Images paper on the Malimg Dataset dataset?
Accuracy (10-fold), Macro F1 (10-fold), Accuracy, Macro F1
What metrics were used to measure the GA Designed Deep CNN model in the Designing Deep Convolutional Neural Networks using a Genetic Algorithm for Image-based Malware Classification paper on the Malimg Dataset dataset?
Accuracy (10-fold), Macro F1 (10-fold), Accuracy, Macro F1
What metrics were used to measure the GRU + SVM model in the Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification paper on the Malimg Dataset dataset?
Accuracy (10-fold), Macro F1 (10-fold), Accuracy, Macro F1
What metrics were used to measure the FFNN + SVM model in the Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification paper on the Malimg Dataset dataset?
Accuracy (10-fold), Macro F1 (10-fold), Accuracy, Macro F1
What metrics were used to measure the CNN + SVM model in the Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification paper on the Malimg Dataset dataset?
Accuracy (10-fold), Macro F1 (10-fold), Accuracy, Macro F1
What metrics were used to measure the Ahmadi et al. (2016): ENT, Bytes 1-G, STR, IMG1, IMG2, MD1, MISC, OPC, SEC, REG, DP, API, SYM, MD2 IMG and Opcode N-Grams + Ensemble Learning (XGBoost) model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the HYDRA model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Zhang et al. (2016): Total lines of each Section, Operation Code Count, API Usage, Special Symbols Count, Asm File Pixel Intensity Feature, Bytes File Block Size Distribution, Bytes File N-Gram + Ensemble Learning (XGBoost) model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Orthrus model in the Orthrus: A Bimodal Learning Architecture for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Opcode-based Shallow CNN model in the Convolutional Neural Network for Classification of Malware Assembly Code paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Hierarchical Convolutional Network model in the A Hierarchical Convolutional Neural Network for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the SEA model in the Sequential Embedding-based Attentive (SEA) classifier for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Dynamic Time Warping + K-NN model in the Classification of Malware by Using Structural Entropy on Convolutional Neural Networks paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Ahmadi et al. (2016): API feature vector + XGBoost model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Autoencoders+Residual Network model in the An End-to-End Deep Learning Architecture for Classification of Malware’s Binary Content paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Multiresolution CNN model in the Classification of Malware by Using Structural Entropy on Convolutional Neural Networks paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the CNN+BiLSTM model in the A Hierarchical Convolutional Neural Network for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Scaled bytes sequence + CNN & Bidirectional LSTM model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Grayscale images + Opcode N-grams (Feature selection for malware classification) model in the Orthrus: A Bimodal Learning Architecture for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the DeepConv model in the A Hierarchical Convolutional Neural Network for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Gray-scale IMG CNN model in the Using Convolutional Neural Networks for Classification of Malware represented as Images paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Hierarchical Attention Network model in the A Hierarchical Convolutional Neural Network for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Structural entropy CNN model in the Classification of Malware by Using Structural Entropy on Convolutional Neural Networks paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Narayanan et al. (2016): PCA features + 1-NN model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Deep Transferred Generative Adversarial Networks model in the Orthrus: A Bimodal Learning Architecture for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Zero Rule Classifier model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Random Guess Classifier model in the HYDRA: A multimodal deep learning framework for malware classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Multiresolution CNN + Bagging model in the Classification of Malware by Using Structural Entropy on Convolutional Neural Networks paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the MalConv model in the A Hierarchical Convolutional Neural Network for Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the CNN BiLSTM - Reb Sampl model in the Deep learning at the shallow end: Malware classification for non-domain experts paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the Haralick features + XGBoost model in the Using Convolutional Neural Networks for Classification of Malware represented as Images paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the LBP features + XGBoost model in the Using Convolutional Neural Networks for Classification of Malware represented as Images paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the GA Designed Deep CNN model in the Designing Deep Convolutional Neural Networks using a Genetic Algorithm for Image-based Malware Classification paper on the Microsoft Malware Classification Challenge dataset?
Accuracy (10-fold), LogLoss, Macro F1 (10-fold), Accuracy (5-fold), F1 score (5-fold), Accuracy
What metrics were used to measure the InvPT model in the InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding paper on the PASCAL Context dataset?
max_F1
What metrics were used to measure the LDF(ours) model in the Label Decoupling Framework for Salient Object Detection paper on the HKU-IS dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the Pyramid Feature Attention model in the Pyramid Feature Attention Network for Saliency detection paper on the HKU-IS dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the U2-Net+ model in the U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection paper on the HKU-IS dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the Pyramid Feature Attention model in the Pyramid Feature Attention Network for Saliency detection paper on the ECSSD dataset?
MAE
What metrics were used to measure the PFAN [zhao2019pyramid] (+) PRN model in the PatchRefineNet: Improving Binary Segmentation by Incorporating Signals from Optimal Patch-wise Binarization paper on the DUTS-test dataset?
MAE
What metrics were used to measure the Pyramid Feature Attention model in the Pyramid Feature Attention Network for Saliency detection paper on the DUTS-test dataset?
MAE
What metrics were used to measure the Pyramid Feature Attention model in the Pyramid Feature Attention Network for Saliency detection paper on the DUT-OMRON dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the LDF(ours) model in the Label Decoupling Framework for Salient Object Detection paper on the DUT-OMRON dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the U2-Net model in the U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection paper on the DUT-OMRON dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the U2-Net+ model in the U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection paper on the DUT-OMRON dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the UCF model in the Learning Uncertain Convolutional Features for Accurate Saliency Detection paper on the DUT-OMRON dataset?
MAE, Fwβ, Sm, relaxFbβ, maxFβ
What metrics were used to measure the EYMOL model in the Variational Laws of Visual Attention for Dynamic Scenes paper on the CAT2000 dataset?
AUC, NSS