prompts | metrics_response |
|---|---|
What metrics were used to measure the FCDD model in the Explainable Deep One-Class Classification paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the IGD (pre-trained SSL) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the GAN based Anomaly Detection in Imbalance Problems model in the GAN-based Anomaly Detection in Imbalance Problems paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the SSOOD model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the SSD model in the SSD: A Unified Framework for Self-Supervised Outlier Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the UTAD model in the Unsupervised Two-Stage Anomaly Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the GOAD model in the Classification-Based Anomaly Detection for General Data paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the ARNET model in the Attribute Restoration Framework for Anomaly Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the Reverse Distillation model in the Anomaly Detection via Reverse Distillation from One-Class Embedding paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the ADT model in the Deep Anomaly Detection Using Geometric Transformations paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the IGD (pre-trained ImageNet) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the ESAD model in the ESAD: End-to-end Deep Semi-supervised Anomaly Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the IGD (scratch) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the P-KDGAN model in the P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the OLED model in the OLED: One-Class Learned Encoder-Decoder Network with Adversarial Context Masking for Novelty Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the OCGAN model in the OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the FastFlow model in the FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the DASVDD model in the DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the Deep SVDD model in the Deep One-Class Classification paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the Self-Supervised DeepSVDD model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the Self-Supervised One-class SVM, RBF kernel model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the One-class CIFAR-10 dataset? | AUROC |
What metrics were used to measure the EfficientAD-M model in the EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the SLSG model in the SLSG: Industrial Image Anomaly Detection by Learning Better Feature Embeddings and One-Class Classification paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ComAD+PatchCore model in the Component-aware anomaly detection framework for adjustable and logical industrial visual inspection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the EfficientAD-S model in the EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ComAD+AST model in the Component-aware anomaly detection framework for adjustable and logical industrial visual inspection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ComAD+RD4AD model in the Component-aware anomaly detection framework for adjustable and logical industrial visual inspection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ComAD+DRAEM model in the Component-aware anomaly detection framework for adjustable and logical industrial visual inspection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the SINBAD model in the Set Features for Fine-grained Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the HETMM model in the Hard Nominal Example-aware Template Mutual Matching for Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the DSKD model in the Contextual Affinity Distillation for Image Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the DADF model in the Visual Anomaly Detection via Dual-Attention Transformer and Discriminative Flow paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the GCAD model in the Beyond Dents and Scratches: Logical Constraints in Unsupervised Anomaly Detection and Localization paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the GLCF model in the Learning Global-Local Correspondence with Semantic Bottleneck for Logical Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the HETMM-F model in the Hard Nominal Example-aware Template Mutual Matching for Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the HETMM-B model in the Hard Nominal Example-aware Template Mutual Matching for Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the DSR model in the DSR -- A dual subspace re-projection network for surface anomaly detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ReContrast model in the ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the ComAD model in the Component-aware anomaly detection framework for adjustable and logical industrial visual inspection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the PatchCore model in the Towards Total Recall in Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the PatchCore Ensemble model in the Towards Total Recall in Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the FastFlow model in the FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the RD4AD model in the Anomaly Detection via Reverse Distillation from One-Class Embedding paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the SimpleNet model in the SimpleNet: A Simple Network for Image Anomaly Detection and Localization paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the Student-Teacher model in the Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the DRAEM model in the DRAEM -- A discriminatively trained reconstruction embedding for surface anomaly detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the SPADE model in the Sub-Image Anomaly Detection with Deep Pyramid Correspondences paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the MNAD model in the Learning Memory-guided Normality for Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the f-AnoGAN model in the f-AnoGAN: Fast Unsupervised Anomaly Detection with Generative Adversarial Networks paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the Variation Model model in the Beyond Dents and Scratches: Logical Constraints in Unsupervised Anomaly Detection and Localization paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the L2AE model in the Beyond Dents and Scratches: Logical Constraints in Unsupervised Anomaly Detection and Localization paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the VAE model in the Auto-Encoding Variational Bayes paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the AST model in the Asymmetric Student-Teacher Networks for Industrial Anomaly Detection paper on the MVTec LOCO AD dataset? | Avg. Detection AUROC, Detection AUROC (only logical), Detection AUROC (only structural), Segmentation AU-sPRO (until FPR 5%) |
What metrics were used to measure the PsudoLabels CLIP ViT model in the Out-of-Distribution Detection Without Class Labels paper on the Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 dataset? | ROC-AUC, Network |
What metrics were used to measure the CSI model in the CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances paper on the Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 dataset? | ROC-AUC, Network |
What metrics were used to measure the DN2 CLIP ViT model in the Deep Nearest Neighbor Anomaly Detection paper on the Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 dataset? | ROC-AUC, Network |
What metrics were used to measure the GOAD model in the Classification-Based Anomaly Detection for General Data paper on the Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 dataset? | ROC-AUC, Network |
What metrics were used to measure the ROT+Trans model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 dataset? | ROC-AUC, Network |
What metrics were used to measure the DualModel model in the Exploring Dual Model Knowledge Distillation for Anomaly Detection paper on the MVTEC AD textures dataset? | Detection AUROC |
What metrics were used to measure the Mixed-Teacher model in the MixedTeacher : Knowledge Distillation for fast inference textural anomaly detection paper on the MVTEC AD textures dataset? | Detection AUROC |
What metrics were used to measure the Reverse Distillation ++ model in the Revisiting Reverse Distillation for Anomaly Detection paper on the MVTEC AD textures dataset? | Detection AUROC |
What metrics were used to measure the BCE-Clip (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the CLIP (Zero Shot) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the Binary Cross Entropy (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the CSI model in the CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the FCDD model in the Explainable Deep One-Class Classification paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the RotNet + Translation + Self-Attention + Resize model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the RotNet + Translation + Self-Attention model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the RotNet + Self-Attention model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the RotNet + Translation model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the RotNet model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the Supervised (OE) model in the Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty paper on the One-class ImageNet-30 dataset? | AUROC |
What metrics were used to measure the CDO model in the Collaborative Discrepancy Optimization for Reliable Image Anomaly Localization paper on the MVTEC 3D-AD dataset? | Segmentation AUPRO, Detection AUROC, Segmentation AUROC |
What metrics were used to measure the AST model in the Asymmetric Student-Teacher Networks for Industrial Anomaly Detection paper on the MVTEC 3D-AD dataset? | Segmentation AUPRO, Detection AUROC, Segmentation AUROC |
What metrics were used to measure the RPL+CoroCL model in the Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the Mask2Anomaly model in the Unmasking Anomalies in Road-Scene Segmentation paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the PEBAL model in the Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the Synboost model in the Pixel-wise Anomaly Detection in Complex Driving Scenes paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the DenseHybrid model in the DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the FlowEneDet model in the Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the SML model in the Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the Bayesian DeepLab model in the Evaluating Bayesian Deep Learning Methods for Semantic Segmentation paper on the Fishyscapes dataset? | AP, FPR95 |
What metrics were used to measure the PANDA model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Cats-and-Dogs dataset? | ROC AUC |
What metrics were used to measure the PANDA-OE model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Cats-and-Dogs dataset? | ROC AUC |
What metrics were used to measure the Self-Supervised One-class SVM, RBF kernel model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Cats-and-Dogs dataset? | ROC AUC |
What metrics were used to measure the Self-Supervised DeepSVDD model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Cats-and-Dogs dataset? | ROC AUC |
What metrics were used to measure the BCE-CLIP (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the CLIP (zero shot) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the DSAD model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the HSC (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the Binary Cross Entropy (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the DSVDD model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out ImageNet-30 dataset? | AUROC |
What metrics were used to measure the AMSRC model in the A Video Anomaly Detection Framework based on Appearance-Motion Semantics Representation Consistency paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the SSMTL++v1 model in the SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the AI-VAD model in the Attribute-based Representations for Accurate and Interpretable Video Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Background-Agnostic Framework+SSMCTB model in the Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the SSMTL+UBnormal model in the UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Background-Agnostic Framework+SSPCAB model in the Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the DMAD model in the Diversity-Measurable Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Background-Agnostic Framework model in the A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
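Every prompt in the table above follows one fixed template ("What metrics were used to measure the {model} model in the {paper} paper on the {dataset} dataset?"), so the model, paper, and dataset fields can be recovered from a raw row with a regular expression. A minimal parsing sketch in Python (the `parse_row` function name and dict layout are illustrative, not part of the dataset):

```python
import re

# Prompt template observed in every row of this dataset:
# "What metrics were used to measure the {model} model in the
#  {paper} paper on the {dataset} dataset?"
PROMPT_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+) model "
    r"in the (?P<paper>.+) paper on the (?P<dataset>.+) dataset\?"
)

def parse_row(row: str) -> dict:
    """Split a 'prompt | metrics |' table row into named fields."""
    # Drop the trailing pipe, then split off the metrics cell.
    prompt, metrics = (part.strip() for part in row.rstrip(" |").rsplit("|", 1))
    m = PROMPT_RE.fullmatch(prompt)
    if m is None:
        raise ValueError(f"row does not match the prompt template: {prompt!r}")
    return {
        **m.groupdict(),
        "metrics": [metric.strip() for metric in metrics.split(",")],
    }
```

Because the greedy `.+` groups backtrack to the last occurrence of each delimiter phrase, model names that themselves contain the word "model" (e.g. "Variation Model") are still captured correctly.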