prompts (string, length 81–413)
metrics_response (string, length 0–371)
What metrics were used to measure the TadML(rgb-only) model in the TadML: A fast temporal action detection with Mechanics-MLP paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the MUSES model in the Multi-shot Temporal Event Localization: a Benchmark paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the AVFusion model in the Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TAGS (I3D) model in the Proposal-Free Temporal Action Detection via Global Segmentation Mask Learning paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the DCAN (TSN features) model in the DCAN: Improving Temporal Action Detection via Dual Context Aggregation paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TSP model in the TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the VSGN model in the Video Self-Stitching Graph Network for Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the DaoTAD model in the RGB Stream Is Enough for Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the Decouple-SSAD model in the Decoupling Localization and Classification in Single Shot Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TAL-Net model in the Rethinking the Faster R-CNN Architecture for Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the GCM model in the Graph Convolutional Module for Temporal Action Localization in Videos paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the AGT (Ours) model in the Activity Graph Transformer for Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the P-GCN model in the Graph Convolutional Networks for Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the DeepMetricLearner model in the Weakly Supervised Temporal Action Localization Using Deep Metric Learning paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the CBR-TS model in the Cascaded Boundary Regression for Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the R-C3D model in the R-C3D: Region Convolutional 3D Network for Temporal Activity Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TURN-FL-16 + S-CNN model in the TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the Yeung et al. model in the End-to-end Learning of Action Detection from Frame Glimpses in Videos paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the S-CNN model in the Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the BSN UNet model in the BSN: Boundary Sensitive Network for Temporal Action Proposal Generation paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the CDC model in the CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the G-TAD model in the G-TAD: Sub-Graph Localization for Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the BMN model in the BMN: Boundary-Matching Network for Temporal Action Proposal Generation paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
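Each THUMOS’14 row above reports mAP at fixed tIoU thresholds plus their average over 0.3:0.7. Assuming the per-threshold mAP values are already computed, the averaged metric is just their arithmetic mean over the stated range (a minimal sketch with hypothetical scores, not any one paper's evaluation code):

```python
def avg_map(map_at_tiou: dict, lo: float = 0.3, hi: float = 0.7) -> float:
    """Average mAP over the tIoU thresholds in [lo, hi] (inclusive)."""
    vals = [m for t, m in map_at_tiou.items() if lo <= t <= hi]
    return sum(vals) / len(vals)

# Hypothetical per-threshold mAP scores, for illustration only.
scores = {0.1: 0.80, 0.2: 0.78, 0.3: 0.74, 0.4: 0.68,
          0.5: 0.60, 0.6: 0.48, 0.7: 0.34}
print(avg_map(scores))  # mean of the five values from tIoU 0.3 to 0.7
```

Note that the 0.1 and 0.2 columns are reported individually but excluded from the 0.3:0.7 average.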
What metrics were used to measure the ActionFormer (SlowFast+Omnivore+EgoVLP) model in the Where a Strong Backbone Meets Strong Features -- ActionFormer for Ego4D Moment Queries Challenge paper on the Ego4D MQ val dataset?
Average mAP, Recall@1x (tIoU=0.5)
What metrics were used to measure the DeepMetricLearner model in the Weakly Supervised Temporal Action Localization Using Deep Metric Learning paper on the ActivityNet-1.2 dataset?
mAP IOU@0.5, mAP IOU@0.1, mAP IOU@0.3, mAP IOU@0.7
What metrics were used to measure the DFC model in the Learning Deep Feature Correspondence for Unsupervised Anomaly Detection and Segmentation paper on the BottleCap dataset?
AUCROC, IOU, AUC(image-level), AUC-PRO
What metrics were used to measure the PCA via over-sampling model in the Anomaly Detection via Over-sampling Principal Component Analysis paper on the KDD Cup 99 dataset?
AUCROC
What metrics were used to measure the SAA+ model in the Segment Any Anomaly without Training via Hybrid Prompt Regularization paper on the KSDD2 dataset?
F1-Score
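F1-Score, reported here for KSDD2 and for several datasets below, is the harmonic mean of precision and recall. A minimal sketch from raw confusion counts (the counts are hypothetical):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=8, fp=2, fn=2))  # precision = recall = 0.8, so F1 = 0.8
```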
What metrics were used to measure the DIF model in the Deep Isolation Forest for Anomaly Detection paper on the NB15-Analysis dataset?
AUC
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the CIFAR-10 dataset?
Mean AUC
What metrics were used to measure the Background-Agnostic Framework model in the A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video paper on the UCSD Peds2 dataset?
AUC
What metrics were used to measure the SSMTL model in the Anomaly Detection in Video via Self-Supervised and Multi-Task Learning paper on the UCSD Peds2 dataset?
AUC
What metrics were used to measure the OMAE model in the Object-centric and memory-guided normality reconstruction for video anomaly detection paper on the UCSD Peds2 dataset?
AUC
What metrics were used to measure the MVT-Flow model in the The voraus-AD Dataset for Anomaly Detection in Robot Applications paper on the voraus-AD dataset?
Avg. Detection AUROC
What metrics were used to measure the LSTM-VAE model in the A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-based Variational Autoencoder paper on the voraus-AD dataset?
Avg. Detection AUROC
What metrics were used to measure the GANF model in the Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series paper on the voraus-AD dataset?
Avg. Detection AUROC
What metrics were used to measure the kNN model in the Anomaly Detection Requires Better Representations paper on the ODDS dataset?
AUROC, F1
What metrics were used to measure the ICL model in the Anomaly Detection Requires Better Representations paper on the ODDS dataset?
AUROC, F1
What metrics were used to measure the GOAD model in the Anomaly Detection Requires Better Representations paper on the ODDS dataset?
AUROC, F1
What metrics were used to measure the MMR model in the Industrial Anomaly Detection with Domain Shift: A Real-world Dataset and Masked Multi-scale Reconstruction paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the ReverseDistillation model in the Anomaly Detection via Reverse Distillation from One-Class Embedding paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the PatchCore model in the Towards Total Recall in Industrial Anomaly Detection paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the DRAEM model in the DRAEM - A Discriminatively Trained Reconstruction Embedding for Surface Anomaly Detection paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the NSA model in the Natural Synthetic Anomalies for Self-Supervised Anomaly Detection and Localization paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the RIAD model in the Reconstruction by Inpainting for Visual Anomaly Detection paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the InTra model in the Inpainting Transformer for Anomaly Detection paper on the AeBAD-V dataset?
Detection AUROC
What metrics were used to measure the CS-Flow (unsupervised) model in the Fully Convolutional Cross-Scale-Flows for Image-based Defect Detection paper on the Surface Defect Saliency of Magnetic Tile dataset?
Detection AUROC, Segmentation AUROC
What metrics were used to measure the DifferNet (unsupervised) model in the Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows paper on the Surface Defect Saliency of Magnetic Tile dataset?
Detection AUROC, Segmentation AUROC
What metrics were used to measure the MCuePush (supervised) model in the Surface Defect Saliency of Magnetic Tile paper on the Surface Defect Saliency of Magnetic Tile dataset?
Detection AUROC, Segmentation AUROC
What metrics were used to measure the GAN-based Anomaly Detection in Imbalance Problems model in the GAN-based Anomaly Detection in Imbalance Problems paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the IGD (pre-trained ImageNet) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the IGD (scratch) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the DASVDD model in the DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the LIS-AE model in the Latent-Insensitive Autoencoders for Anomaly Detection paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the P-KDGAN model in the P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection paper on the MNIST dataset?
ROC AUC
What metrics were used to measure the SINBAD model in the Set Features for Fine-grained Anomaly Detection paper on the UEA time-series datasets dataset?
Avg. ROC-AUC
What metrics were used to measure the GOAD model in the Classification-Based Anomaly Detection for General Data paper on the UEA time-series datasets dataset?
Avg. ROC-AUC
What metrics were used to measure the DROCC model in the DROCC: Deep Robust One-Class Classification paper on the UEA time-series datasets dataset?
Avg. ROC-AUC
What metrics were used to measure the GAN-based Anomaly Detection in Imbalance Problems model in the GAN-based Anomaly Detection in Imbalance Problems paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the PANDA model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the Reverse Distillation model in the Anomaly Detection via Reverse Distillation from One-Class Embedding paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the IGD (pre-trained SSL) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the IGD (pre-trained ImageNet) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the Self-Supervised One-class SVM, RBF kernel model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the DASVDD model in the DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the Shell-based Anomaly (supervised) model in the Shell Theory: A Statistical Model of Reality paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the IGD (scratch) model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the PANDA-OE model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the Self-Supervised DeepSVDD model in the PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the P-KDGAN model in the P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection paper on the Fashion-MNIST dataset?
ROC AUC
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the SVHN dataset?
Mean AUC
What metrics were used to measure the Shell-based Anomaly (supervised) model in the Shell Theory: A Statistical Model of Reality paper on the STL-10 dataset?
ROC AUC
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the Musk v1 dataset?
F1-Score
What metrics were used to measure the Mask2Anomaly model in the Unmasking Anomalies in Road-Scene Segmentation paper on the Lost and Found dataset?
AP, FPR
What metrics were used to measure the PEBAL model in the Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes paper on the Lost and Found dataset?
AP, FPR
What metrics were used to measure the SynBoost model in the Pixel-wise Anomaly Detection in Complex Driving Scenes paper on the Lost and Found dataset?
AP, FPR
What metrics were used to measure the SML model in the Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation paper on the Lost and Found dataset?
AP, FPR
What metrics were used to measure the HTM AL model in the Unsupervised real-time anomaly detection for streaming data paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the CAD OSE model in the Unsupervised real-time anomaly detection for streaming data paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the ARIMA AD model in the Online Forecasting and Anomaly Detection Based on the ARIMA Model paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Numenta HTM model in the Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the nab-comportex model in the Unsupervised real-time anomaly detection for streaming data paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Twitter ADVec v1.0.0 model in the Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Etsy Skyline model in the Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Bayesian Changepoint model in the Real-Time Anomaly Detection for Streaming Analytics paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Random model in the Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the Sliding Threshold model in the Real-Time Anomaly Detection for Streaming Analytics paper on the Numenta Anomaly Benchmark dataset?
NAB score
What metrics were used to measure the DevNet model in the Deep Anomaly Detection with Deviation Networks paper on the Thyroid dataset?
AUC, Average Precision, F1-Score
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the Thyroid dataset?
AUC, Average Precision, F1-Score
What metrics were used to measure the BCE-CLIP model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
What metrics were used to measure the CLIP (zero shot) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
What metrics were used to measure the Binary Cross Entropy (OE) model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
What metrics were used to measure the HSC model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
What metrics were used to measure the DSAD model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
What metrics were used to measure the DSVDD model in the Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images paper on the Leave-One-Class-Out CIFAR-10 dataset?
AUROC
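AUROC (ROC AUC), the most common metric in the anomaly-detection rows above, can be computed without tracing out the curve via its rank-statistic interpretation: the probability that a randomly chosen anomaly scores higher than a randomly chosen normal sample, with ties counted as one half. A minimal sketch with hypothetical scores:

```python
def auroc(scores_pos, scores_neg):
    """ROC AUC as P(positive score > negative score), ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical anomaly scores: anomalies should rank above normals.
anomalies = [0.9, 0.8, 0.75]
normals = [0.3, 0.5, 0.85, 0.2]
print(auroc(anomalies, normals))  # 10 of 12 pairs ranked correctly
```

This O(n·m) pairwise form is fine for illustration; library implementations sort the scores instead.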
What metrics were used to measure the DAC (STG-NF + Jigsaw) model in the Divide and Conquer in Video Anomaly Detection: A Comprehensive Review and New Approach paper on the ShanghaiTech dataset?
AUC, RBDC, TBDC
What metrics were used to measure the AI-VAD model in the Attribute-based Representations for Accurate and Interpretable Video Anomaly Detection paper on the ShanghaiTech dataset?
AUC, RBDC, TBDC
What metrics were used to measure the STG-NF model in the Normalizing Flows for Human Pose Anomaly Detection paper on the ShanghaiTech dataset?
AUC, RBDC, TBDC
What metrics were used to measure the Jigsaw-VAD model in the Video Anomaly Detection by Solving Decoupled Spatio-Temporal Jigsaw Puzzles paper on the ShanghaiTech dataset?
AUC, RBDC, TBDC
What metrics were used to measure the SSMTL++v2 model in the SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection paper on the ShanghaiTech dataset?
AUC, RBDC, TBDC