| prompts | metrics_response |
|---|---|
What metrics were used to measure the ResNet-152 Denoise model in the Feature Denoising for Improving Adversarial Robustness paper on the ImageNet (targeted PGD, max perturbation=16) dataset? | Accuracy |
What metrics were used to measure the ResNeXt-101 DenoiseAll model in the Feature Denoising for Improving Adversarial Robustness paper on the ImageNet (targeted PGD, max perturbation=16) dataset? | Accuracy |
What metrics were used to measure the ResNet-152 model in the Feature Denoising for Improving Adversarial Robustness paper on the ImageNet (targeted PGD, max perturbation=16) dataset? | Accuracy |
What metrics were used to measure the wideresnet-34-20 model in the Learnable Boundary Guided Adversarial Training paper on the CIFAR-100 dataset? | autoattack |
What metrics were used to measure the wideresnet-34-10 model in the Learnable Boundary Guided Adversarial Training paper on the CIFAR-100 dataset? | autoattack |
What metrics were used to measure the Cassandra model in the Cassandra: Detecting Trojaned Networks from Adversarial Perturbations paper on the TrojAI Round 0 dataset? | Detection Accuracy |
What metrics were used to measure the SAT-EfficientNet-L1 model in the Smooth Adversarial Training paper on the ImageNet (non-targeted PGD, max perturbation=4) dataset? | Accuracy |
What metrics were used to measure the LLR-ResNet-152 model in the Adversarial Robustness through Local Linearization paper on the ImageNet (non-targeted PGD, max perturbation=4) dataset? | Accuracy |
What metrics were used to measure the ResNet-152 free-m=4 model in the Adversarial Training for Free! paper on the ImageNet (non-targeted PGD, max perturbation=4) dataset? | Accuracy |
What metrics were used to measure the ResNet-101 free-m=4 model in the Adversarial Training for Free! paper on the ImageNet (non-targeted PGD, max perturbation=4) dataset? | Accuracy |
What metrics were used to measure the ResNet-50 free-m=4 model in the Adversarial Training for Free! paper on the ImageNet (non-targeted PGD, max perturbation=4) dataset? | Accuracy |
What metrics were used to measure the Cassandra model in the Cassandra: Detecting Trojaned Networks from Adversarial Perturbations paper on the TrojAI Round 1 dataset? | Detection Accuracy |
What metrics were used to measure the ResNet101 model in the NOMARO: Defending against Adversarial Attacks by NOMA-Inspired Reconstruction Operation paper on the ImageNet dataset? | Accuracy, Robust Accuracy |
What metrics were used to measure the InceptionV3 model in the NOMARO: Defending against Adversarial Attacks by NOMA-Inspired Reconstruction Operation paper on the ImageNet dataset? | Accuracy, Robust Accuracy |
What metrics were used to measure the ResNet-50 model in the Language Guided Adversarial Purification paper on the ImageNet dataset? | Accuracy, Robust Accuracy |
What metrics were used to measure the Feature Denoising model in the Feature Denoising for Improving Adversarial Robustness paper on the ImageNet dataset? | Accuracy, Robust Accuracy |
What metrics were used to measure the graspnet-baseline model in the GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping paper on the GraspNet-1Billion dataset? | AP |
What metrics were used to measure the RA-GraspNet (GraspNet with Rotation Anchor) model in the NBMOD: Find It and Grasp It in Noisy Background paper on the NBMOD dataset? | Acc |
What metrics were used to measure the Efficient-Grasping model in the Lightweight Convolutional Neural Network with Gaussian-based Grasping Representation for Robotic Grasping Detection paper on the Jacquard dataset? | Accuracy (%) |
What metrics were used to measure the GR-ConvNet model in the Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network paper on the Jacquard dataset? | Accuracy (%) |
What metrics were used to measure the grasp_det_seg_cnn (rgb only) model in the End-to-end Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB paper on the Jacquard dataset? | Accuracy (%) |
What metrics were used to measure the grasp_det_seg_cnn (rgb only, IW split) model in the End-to-end Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the GR-ConvNet model in the Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the ResNet50 multi-grasp predictor model in the Real-world multiobject, multigrasp detection paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the Multi-Modal Grasp Predictor model in the Robotic Grasp Detection using Deep Convolutional Neural Networks paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the AlexNet, MultiGrasp model in the Real-Time Grasp Detection Using Convolutional Neural Networks paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the GGCNN model in the Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the Fast Search model in the Deep Learning for Detecting Robotic Grasps paper on the Cornell Grasp Dataset? | 5 fold cross validation |
What metrics were used to measure the SLEEPER-GBT model in the SLEEPER: interpretable Sleep staging via Prototypes from Expert Rules paper on the ISRUC-Sleep dataset? | AUROC, Accuracy, Kappa |
What metrics were used to measure the multi-head attention model in the An Attention-Based Deep Learning Approach for Sleep Stage Classification With Single-Channel EEG paper on the Sleep-EDF dataset? | Accuracy, Cohen’s Kappa score, Number of parameters (M) |
What metrics were used to measure the Sequence Cross-Modal Transformer-15 model in the Towards Interpretable Sleep Stage Classification Using Cross-Modal Transformers paper on the Sleep-EDF dataset? | Accuracy, Cohen’s Kappa score, Number of parameters (M) |
What metrics were used to measure the TS-TCC model in the Time-Series Representation Learning via Temporal and Contextual Contrasting paper on the Sleep-EDF dataset? | Accuracy, Cohen’s Kappa score, Number of parameters (M) |
What metrics were used to measure the Epoch Cross-Modal Transformer model in the Towards Interpretable Sleep Stage Classification Using Cross-Modal Transformers paper on the Sleep-EDF dataset? | Accuracy, Cohen’s Kappa score, Number of parameters (M) |
What metrics were used to measure the SheetCopilot (NIPS2023) model in the SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models paper on the SheetCopilot dataset? | Pass@1 |
What metrics were used to measure the PackNN model in the PackIt: A Virtual Environment for Geometric Planning paper on the PackIt dataset? | Average Reward |
What metrics were used to measure the Heuristic Largest First-Aligned-BLBF model in the PackIt: A Virtual Environment for Geometric Planning paper on the PackIt dataset? | Average Reward |
What metrics were used to measure the Heuristic Largest First-Aligned-Random model in the PackIt: A Virtual Environment for Geometric Planning paper on the PackIt dataset? | Average Reward |
What metrics were used to measure the Heuristic Random-Aligned-BLBF model in the PackIt: A Virtual Environment for Geometric Planning paper on the PackIt dataset? | Average Reward |
What metrics were used to measure the Attention-driven Robotic Manipulation (ARM) model in the Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation paper on the RLBench dataset? | Success Rate |
What metrics were used to measure the DocIE w transformer model in the DocOIE: A Document-level Context-Aware Dataset for OpenIE paper on the DocOIE-transportation dataset? | F1 |
What metrics were used to measure the Reverb model in the DocOIE: A Document-level Context-Aware Dataset for OpenIE paper on the DocOIE-transportation dataset? | F1 |
What metrics were used to measure the DeepEx (zero-shot) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the Deepstruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the SpanOIE [48] model in the A Survey on Neural Open Information Extraction: Current Status and Future Directions paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the SpanOIE model in the Span Model for Open Information Extraction on Accurate Corpus paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the LLaMA-2-70B w/ Selected Demo & Uncertainty model in the Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the GPT-3.5-Turbo w/ Selected Demo & Uncertainty model in the Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the RnnOIE [36] model in the A Survey on Neural Open Information Extraction: Current Status and Future Directions paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the OpenIE4 [26] model in the A Survey on Neural Open Information Extraction: Current Status and Future Directions paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the ClausIE [9] model in the A Survey on Neural Open Information Extraction: Current Status and Future Directions paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the LLaMA-2-13B w/ Selected Demo & Uncertainty model in the Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the Deepstruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the OIE2016 dataset? | F1, AUC |
What metrics were used to measure the Deepstruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Web dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Web dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Web dataset? | F1, AUC |
What metrics were used to measure the DeepEx (zero-shot) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the Web dataset? | F1, AUC |
What metrics were used to measure the Deepstruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the NYT dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the NYT dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the NYT dataset? | F1, AUC |
What metrics were used to measure the DeepEx (zero-shot) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the NYT dataset? | F1, AUC |
What metrics were used to measure the SMiLe-OIE model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the BERT + Dep-GCN - Const-GCN model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the BERT + Dep-GCN [?] Const-GCN model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the BERT + Const-GCN model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the IMoJIE Kolluru et al. (2020) model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the BERT + Dep-GCN model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the BERT Solawetz and Larson (2021) model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the CIGL-OIE + IGL-CA Kolluru et al. (2020) model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the GloVe + bi-LSTM + CRF model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the GloVe + bi-LSTM Stanovsky et al. (2018) model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the CopyAttention Cui et al. (2018) model in the Syntactic Multi-view Learning for Open Information Extraction paper on the LSOIE-wiki dataset? | F1 |
What metrics were used to measure the Deepstruct zero-shot model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Penn Treebank dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Penn Treebank dataset? | F1, AUC |
What metrics were used to measure the DeepEx (zero-shot) model in the Zero-Shot Information Extraction as a Unified Text-to-Triple Translation paper on the Penn Treebank dataset? | F1, AUC |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the Penn Treebank dataset? | F1, AUC |
What metrics were used to measure the ClausIE model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the MinIE model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the CompactIE model in the CompactIE: Compact Facts in Open Information Extraction paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the M2OIE (EN) model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the ROIE-T model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the OpenIE6 model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the M2OIE (ZH) model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the ROIE-N model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the Stanford OIE model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the M2OIE (DE) model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the Naive OIE model in the BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation paper on the BenchIE dataset? | Precision, F1, Recall |
What metrics were used to measure the CIGL-OIE + IGL-CA (OpenIE6) model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the CIGL-OIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the IMoJIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the MinIE Gashteovski et al. (2017) model in the WiRe57 : A Fine-Grained Benchmark for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the OpenIE5 model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the IGL-OIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the OpenIE4 model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the ClausIE Del Corro and Gemulla (2013) model in the WiRe57 : A Fine-Grained Benchmark for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the ClausIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the SpanOIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the MinIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the OpenIE 4 Mausam (2016) model in the WiRe57 : A Fine-Grained Benchmark for Open Information Extraction paper on the WiRe57 dataset? | F1 |
What metrics were used to measure the RnnOIE model in the OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction paper on the WiRe57 dataset? | F1 |