| prompts | metrics_response |
|---|---|
What metrics were used to measure the SSMTL++v2 model in the paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the SSMTL model in the Anomaly Detection in Video via Self-Supervised and Multi-Task Learning paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Two-stream model in the Context Recovery and Knowledge Retrieval: A Novel Two-Stream Framework for Video Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the SD-MAE model in the Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Cloze Test model in the Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Narrowed Normality Clusters model in the Detecting abnormal events in video using Narrowed Normality Clusters paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the AKD-VAD model in the Lightning Fast Video Anomaly Detection via Adversarial Knowledge Distillation paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Object-centric AE model in the Object-centric Auto-encoders and Dummy Anomalies for Abnormal Event Detection in Video paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Siamese Net model in the Learning a distance function with a Siamese network to localize anomalies in videos paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Learning not to reconstruct model in the Learning Not to Reconstruct Anomalies paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Appearance-Motion Correspondence model in the Anomaly Detection in Video Sequence with Appearance-Motion Correspondence paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the ASTNet model in the Attention-based residual autoencoder for video anomaly detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Any-Shot Sequential model in the Any-Shot Sequential Anomaly Detection in Surveillance Videos paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the EVAL model in the EVAL: Explainable Video Anomaly Localization paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the FastAno model in the FastAno: Fast Anomaly Detection via Spatio-temporal Patch Transformation paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Future Frame Prediction model in the Future Frame Prediction for Anomaly Detection -- A New Baseline paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the ConvVQ model in the Diversity-Measurable Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Sparse Coding Stacked RNN model in the A Revisit of Sparse Coding Based Anomaly Detection in Stacked RNN Framework paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the Unmasking model in the Unmasking the abnormal events in video paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the HF2VAD+SSPCAB model in the Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection paper on the CUHK Avenue dataset? | AUC, RBDC, TBDC, FPS |
What metrics were used to measure the CPR model in the Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval paper on the MVTec 3D-AD (RGB) dataset? | Detection AUROC, Segmentation AP, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the Two-stream model in the Context Recovery and Knowledge Retrieval: A Novel Two-Stream Framework for Video Anomaly Detection paper on the Corridor dataset? | AUC |
What metrics were used to measure the Multi-timescale Prediction model in the Multi-timescale Trajectory Prediction for Abnormal Human Activity Detection paper on the Corridor dataset? | AUC |
What metrics were used to measure the AttentDifferNet (SENet-AlexNet) model in the Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study paper on the InsPLAD dataset? | Detection AUROC |
What metrics were used to measure the DifferNet model in the Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows paper on the InsPLAD dataset? | Detection AUROC |
What metrics were used to measure the RD++ (CBAM-ResNet-18) model in the Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study paper on the InsPLAD dataset? | Detection AUROC |
What metrics were used to measure the RD++ (SENet-ResNet-18) model in the Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study paper on the InsPLAD dataset? | Detection AUROC |
What metrics were used to measure the RD++ (ResNet-18) model in the Revisiting Reverse Distillation for Anomaly Detection paper on the InsPLAD dataset? | Detection AUROC |
What metrics were used to measure the cDNP+OE model in the Far Away in the Deep Space: Dense Nearest-Neighbor-Based Out-of-Distribution Detection paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the NFlowJS-GF (with extra inlier set: Vistas and Wilddash2) model in the Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the cDNP model in the Far Away in the Deep Space: Dense Nearest-Neighbor-Based Out-of-Distribution Detection paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the RPL+CoroCL model in the Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the FlowEneDet model in the Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Mask2Anomaly model in the Unmasking Anomalies in Road-Scene Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the PEBAL model in the Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the DenseHybrid model in the DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the SynBoost model in the Pixel-wise Anomaly Detection in Complex Driving Scenes paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the CosMe model in the Consensus Synergizes with Memory: A Simple Approach for Anomaly Segmentation in Urban Scenes paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the NFlow model in the Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the SML model in the Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Dirichlet DeepLab model in the The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the OutlierHead combined instances model in the Simultaneous Semantic Segmentation and Outlier Detection in Presence of Domain Shift paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Void Classifier model in the The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Bayesian DeepLab model in the The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Learned Embedding Density model in the The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Softmax Entropy model in the The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation paper on the Fishyscapes L&F dataset? | AP, FPR95 |
What metrics were used to measure the Shell-based Anomaly (supervisered) model in the Shell Theory: A Statistical Model of Reality paper on the ASSIRA Cat Vs Dog dataset? | ROC AUC |
What metrics were used to measure the CCD model in the Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the IGD model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the PANDA model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the PaDiM model in the PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the F-Anogan model in the f-AnoGAN: Fast Unsupervised Anomaly Detection with Generative Adversarial Networks paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the OCGAN model in the OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations paper on the Hyper-Kvasir Dataset dataset? | AUC |
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the KDD Cup 1999 dataset? | F1-Score |
What metrics were used to measure the EfficientAD-M model in the EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the DiffusionAD model in the DiffusionAD: Denoising Diffusion for Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the EfficientAD-S model in the EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the DDAD model in the Anomaly Detection with Conditioned Denoising Diffusion Models paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the FAIRnoDTD model in the FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the EdgRec model in the Reconstruction from edge image combined with color and gradient difference for industrial surface anomaly detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the TransFusion model in the TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the WinCLIP+ (4-shot) model in the WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the WinCLIP+ (2-shot) model in the WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the PaDiM model in the PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the WinCLIP+ (1-shot) model in the WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the GCAD model in the Beyond Dents and Scratches: Logical Constraints in Unsupervised Anomaly Detection and Localization paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the AST model in the Asymmetric Student-Teacher Networks for Industrial Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the DRAEM model in the DRAEM -- A discriminatively trained reconstruction embedding for surface anomaly detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the Reverse Distillation model in the Anomaly Detection via Reverse Distillation from One-Class Embedding paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the DSR model in the DSR -- A dual subspace re-projection network for surface anomaly detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the FAVAE model in the Anomaly localization by modeling perceptual features paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the SPADE model in the Sub-Image Anomaly Detection with Deep Pyramid Correspondences paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the STPM model in the Student-Teacher Feature Pyramid Matching for Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the AnoDDPM model in the AnoDDPM: Anomaly Detection With Denoising Diffusion Probabilistic Models Using Simplex Noise paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the FastFlow model in the FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the WinCLIP (0-shot) model in the WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the CFA model in the CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the ReContrast model in the ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the PatchCore model in the Towards Total Recall in Industrial Anomaly Detection paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the Student-Teacher model in the Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the CFLOW model in the CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the SPD model in the SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the CutPaste model in the CutPaste: Self-Supervised Learning for Anomaly Detection and Localization paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the SAA+ model in the Segment Any Anomaly without Training via Hybrid Prompt Regularization paper on the VisA dataset? | Segmentation AUPRO (until 30% FPR), Detection AUROC, F1-Score, Segmentation AUPRO, Segmentation AUROC |
What metrics were used to measure the Deep SVDD model in the TII-SSRC-23 Dataset: Typological Exploration of Diverse Traffic Patterns for Intrusion Detection paper on the TII-SSRC-23 dataset? | AUC |
What metrics were used to measure the PsudoLabels ViT model in the Out-of-Distribution Detection Without Class Labels paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the PsudoLabels ResNet-152 model in the Out-of-Distribution Detection Without Class Labels paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the PsudoLabels ResNet-18 model in the Out-of-Distribution Detection Without Class Labels paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the SCAN Features model in the Out-of-Distribution Detection Without Class Labels paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the MeanShifted model in the Mean-Shifted Contrastive Loss for Anomaly Detection paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the SSD model in the SSD: A Unified Framework for Self-Supervised Outlier Detection paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the CSI model in the CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the GOAD model in the Classification-Based Anomaly Detection for General Data paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the MTL model in the Shifting Transformation Learning for Out-of-Distribution Detection paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the Input Complexity (Glow) model in the Input complexity and out-of-distribution detection with likelihood-based generative models paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the Likelihood (Glow) model in the Input complexity and out-of-distribution detection with likelihood-based generative models paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the Input Complexity (PixelCNN++) model in the Input complexity and out-of-distribution detection with likelihood-based generative models paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the Likelihood (PixelCNN++) model in the Input complexity and out-of-distribution detection with likelihood-based generative models paper on the Unlabeled CIFAR-10 vs CIFAR-100 dataset? | AUROC, Network |
What metrics were used to measure the DevNet model in the Deep Anomaly Detection with Deviation Networks paper on the Census dataset? | AUC, Average Precision |
What metrics were used to measure the RbA model in the RbA: Segmenting Unknown Regions Rejected by All paper on the Road Anomaly dataset? | AP, FPR95 |
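Each row above pairs a natural-language prompt with a comma-separated list of metrics. A minimal sketch of how one might parse a row into a structured pair, assuming every row has the form `prompt | metric1, metric2, ...` (the `parse_row` helper below is hypothetical, not part of the dataset):

```python
# Minimal sketch: parse one pipe-delimited row of the prompt/metrics table
# into a (prompt, metrics) pair. Assumes the metrics column is the text
# after the last " | " separator, as in the rows above.

def parse_row(row: str) -> tuple[str, list[str]]:
    """Split a table row into the prompt text and its list of metrics."""
    prompt, _, metrics = row.rpartition(" | ")
    return prompt.strip(), [m.strip() for m in metrics.split(",")]

row = ("What metrics were used to measure the DevNet model in the "
       "Deep Anomaly Detection with Deviation Networks paper on the "
       "Census dataset? | AUC, Average Precision")
prompt, metrics = parse_row(row)
```

Using `rpartition` rather than `split` keeps the parse robust if a prompt ever contained a pipe character itself, since only the final separator is treated as the column boundary.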