Columns:
prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
What metrics were used to measure the Band-Split RNN (semi-sup.) model in the Music Source Separation with Band-split RNN paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the TFC-TDF-UNet (v3) model in the Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3 paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Band-Split RNN model in the Music Source Separation with Band-split RNN paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Hybrid Demucs model in the Hybrid Spectrogram and Waveform Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the KUIELab-MDX-Net model in the KUIELab-MDX-Net: A Two-Stream Neural Network for Music Demixing paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the CDE-HTCN model in the Hierarchic Temporal Convolutional Network With Cross-Domain Encoder for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Attentive-MultiResUNet model in the An Efficient Short-Time Discrete Cosine Transform and Attentive MultiResUNet Framework for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the DEMUCS (extra) model in the Music Source Separation in the Waveform Domain paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the CWS-PResUNet model in the CWS-PResUNet: Music Source Separation with Channel-wise Subband Phase-aware ResUNet paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the D3Net model in the D3Net: Densely connected multidilated DenseNet for music source separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Conv-TasNet (extra) model in the Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the UMXL model in the Open-Unmix - A Reference Implementation for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the DEMUCS model in the Music Source Separation in the Waveform Domain paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the TAK2 model in the MMDenseLSTM: An efficient combination of convolutional and recurrent neural networks for audio source separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Spleeter (MWF) model in the Spleeter: A Fast And State-of-the Art Music Source Separation Tool With Pre-trained Models paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the LaSAFT+GPoCM model in the LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the X-UMX model in the All for One and One for All: Improving Music Separation by Bridging Networks paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Conv-TasNet model in the Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Sams-Net model in the Sams-Net: A Sliced Attention-based Neural Network for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Meta-TasNet model in the Meta-learning Extractors for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the UMX model in the Open-Unmix - A Reference Implementation for Music Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the Wavenet model in the End-to-end music source separation: is it possible in the waveform domain? paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the STL2 model in the Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation paper on the MUSDB18 dataset?
SDR (avg), SDR (vocals), SDR (drums), SDR (bass), SDR (other)
What metrics were used to measure the LQ-VAE + Scalable Transformer model in the Unsupervised Source Separation via Bayesian Inference in the Latent Domain paper on the Slakh2100 dataset?
SDR (bass), SDR (drums)
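Every music source separation row above reports SDR (Signal-to-Distortion Ratio) per stem plus an average. As a minimal sketch, the plain energy-ratio definition is SDR = 10·log10(‖s‖² / ‖s − ŝ‖²) in dB; note the benchmark papers typically report BSSEval/museval SDR, which includes a projection step this simple version omits:

```python
import numpy as np

def sdr_db(reference, estimate):
    """Plain energy-ratio SDR in dB: 10 * log10(signal energy / error energy).

    A simplified stand-in for BSSEval SDR; assumes aligned, same-length signals.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    err = reference - estimate
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(err**2))

# Example: an estimate at half the reference amplitude leaves error energy at
# 1/4 of signal energy, so SDR = 10 * log10(4) ≈ 6.02 dB.
ref = [1.0, -1.0, 1.0, -1.0]
est = [0.5, -0.5, 0.5, -0.5]
print(round(sdr_db(ref, est), 2))  # ≈ 6.02
```

The "SDR (avg)" column in the rows above is then the mean of the per-stem SDRs (vocals, drums, bass, other).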
What metrics were used to measure the TriDet (VideoMAEv2) model in the Temporal Action Localization with Enhanced Instant Discriminability paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the TriDet (SlowFast) model in the TriDet: Temporal Action Detection with Relative Boundary Modeling paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the TriDet (I3D RGB) model in the TriDet: Temporal Action Detection with Relative Boundary Modeling paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the TadTr (I3D RGB) model in the End-to-end Temporal Action Detection with Transformer paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the LoFi+G-TAD (RGB, RN18) model in the Low-Fidelity Video Encoder Optimization for Temporal Action Localization paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the SSN model in the HACS: Human Action Clips and Segments Dataset for Recognition and Temporal Localization paper on the HACS dataset?
Average-mAP, mAP@0.5, mAP@0.75, mAP@0.95
What metrics were used to measure the TriDet (VideoMAEv2) model in the Temporal Action Localization with Enhanced Instant Discriminability paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the TriDet (I3D-rgb) model in the Temporal Action Localization with Enhanced Instant Discriminability paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the TemporalMaxer model in the TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the PointTAD model in the PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the PDAN model in the PDAN: Pyramid Dilated Attention Network for Action Detection paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the MS-TCT model in the MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the MLAD model in the Modeling Multi-Label Action Dependencies for Temporal Action Localization paper on the MultiTHUMOS dataset?
Average mAP, mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7, mAP IOU@0.8, mAP IOU@0.9
What metrics were used to measure the AVFusion model in the Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization paper on the THUMOS'14 dataset?
mAP IOU@0.5
What metrics were used to measure the ActionFormer (SlowFast+Omnivore+EgoVLP) model in the Where a Strong Backbone Meets Strong Features -- ActionFormer for Ego4D Moment Queries Challenge paper on the Ego4D MQ test dataset?
Average mAP, Recall@1x (tIoU=0.5)
What metrics were used to measure the S-CNN model in the Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs paper on the MEXaction2 dataset?
mAP
What metrics were used to measure the PRN+BMN (ensemble) model in the Proposal Relation Network for Temporal Action Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the PRN (CSN) model in the Proposal Relation Network for Temporal Action Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TCANet (SlowFast R101) model in the Temporal Context Aggregation Network for Temporal Action Proposal Refinement paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the PRN (ViViT) model in the Proposal Relation Network for Temporal Action Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the AVFusion model in the Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TriDet (TSP features) model in the TriDet: Temporal Action Detection with Relative Boundary Modeling paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TadTR (TSP features) model in the End-to-end Temporal Action Detection with Transformer paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the ActionFormer (TSP features) model in the ActionFormer: Localizing Moments of Actions with Transformers paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TAGS (I3D) model in the Proposal-Free Temporal Action Detection via Global Segmentation Mask Learning paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the VSGN (TSP features) model in the Video Self-Stitching Graph Network for Temporal Action Localization paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TSP model in the TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the DCAN (TSN features) model in the DCAN: Improving Temporal Action Detection via Dual Context Aggregation paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the E2E-TAD (SlowFast R50+TadTR) model in the An Empirical Study of End-to-End Temporal Action Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the LoFi+G-TAD model in the Low-Fidelity Video Encoder Optimization for Temporal Action Localization paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the BSN++ model in the BSN++: Complementary Boundary Regressor with Scale-Balanced Relation Modeling for Temporal Action Proposal Generation paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the G-TAD+BSP model in the Boundary-sensitive Pre-training for Temporal Localization in Videos paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the SSTAP@100%+ model in the Self-Supervised Learning for Semi-Supervised Temporal Action Proposal paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the BC-GNN model in the Boundary Content Graph Neural Network for Temporal Action Proposal Generation paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the GCM model in the Graph Convolutional Module for Temporal Action Localization in Videos paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the G-TAD model in the G-TAD: Sub-Graph Localization for Temporal Action Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the BMN model in the BMN: Boundary-Matching Network for Temporal Action Proposal Generation paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the SSN model in the A Pursuit of Temporal Accuracy in General Activity Detection paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the P-GCN model in the Graph Convolutional Networks for Temporal Action Localization paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the BSN model in the BSN: Boundary Sensitive Network for Temporal Action Proposal Generation paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the UnLoc-L model in the UnLoc: A Unified Framework for Video Localization Tasks paper on the ActivityNet-1.3 dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TemporalMaxer model in the TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization paper on the MUSES dataset?
mAP, mAP@0.3, mAP@0.4, mAP@0.5, mAP@0.6, mAP@0.7
What metrics were used to measure the MUSES model in the Multi-shot Temporal Event Localization: a Benchmark paper on the MUSES dataset?
mAP, mAP@0.3, mAP@0.4, mAP@0.5, mAP@0.6, mAP@0.7
What metrics were used to measure the VideoMAE V2-g model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the FineAction dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the FineAction dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the BMN (i3d feature) model in the BMN: Boundary-Matching Network for Temporal Action Proposal Generation paper on the FineAction dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the G-TAD (i3d feature) model in the G-TAD: Sub-Graph Localization for Temporal Action Detection paper on the FineAction dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the DBG (i3d feature) model in the Fast Learning of Temporal Action Proposal via Dense Boundary Generator paper on the FineAction dataset?
mAP, mAP IOU@0.5, mAP IOU@0.75, mAP IOU@0.95
What metrics were used to measure the TriDet (verb) model in the TriDet: Temporal Action Detection with Relative Boundary Modeling paper on the EPIC-KITCHENS-100 dataset?
Avg mAP (0.1-0.5), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5
What metrics were used to measure the TemporalMaxer (verb) model in the TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization paper on the EPIC-KITCHENS-100 dataset?
Avg mAP (0.1-0.5), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5
What metrics were used to measure the ActionFormer (verb) model in the ActionFormer: Localizing Moments of Actions with Transformers paper on the EPIC-KITCHENS-100 dataset?
Avg mAP (0.1-0.5), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5
What metrics were used to measure the G-TAD (verb) model in the G-TAD: Sub-Graph Localization for Temporal Action Detection paper on the EPIC-KITCHENS-100 dataset?
Avg mAP (0.1-0.5), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5
What metrics were used to measure the BMN (verb) model in the BMN: Boundary-Matching Network for Temporal Action Proposal Generation paper on the EPIC-KITCHENS-100 dataset?
Avg mAP (0.1-0.5), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5
What metrics were used to measure the VideoCLIP model in the VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding paper on the CrossTask dataset?
Recall
What metrics were used to measure the VLM model in the VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding paper on the CrossTask dataset?
Recall
What metrics were used to measure the TACo model in the TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment paper on the CrossTask dataset?
Recall
What metrics were used to measure the Text-Video Embedding model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the CrossTask dataset?
Recall
What metrics were used to measure the Fully-supervised upper-bound model in the Cross-task weakly supervised learning from instructional videos paper on the CrossTask dataset?
Recall
What metrics were used to measure the Zhukov model in the Cross-task weakly supervised learning from instructional videos paper on the CrossTask dataset?
Recall
What metrics were used to measure the Alayrac model in the Unsupervised Learning from Narrated Instruction Videos paper on the CrossTask dataset?
Recall
What metrics were used to measure the BasicTAD (R50-SlowOnly) model in the BasicTAD: an Astounding RGB-Only Baseline for Temporal Action Detection paper on the THUMOS14 dataset?
Avg mAP (0.3:0.7)
What metrics were used to measure the ActionFormer (InternVideo features) model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TriDet (VideoMAE v2-g feature) model in the Temporal Action Localization with Enhanced Instant Discriminability paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the ActionFormer (VideoMAE V2-g features) model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TriDet (I3D features) model in the TriDet: Temporal Action Detection with Relative Boundary Modeling paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TemporalMaxer (I3D features) model in the TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the ActionFormer (I3D features) model in the ActionFormer: Localizing Moments of Actions with Transformers paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TadML (two-stream) model in the TadML: A fast temporal action detection with Mechanics-MLP paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the BasicTAD (160,6,192,R50-SlowOnly) model in the BasicTAD: an Astounding RGB-Only Baseline for Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the TadTR model in the End-to-end Temporal Action Detection with Transformer paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the ReAct (TSN features) model in the ReAct: Temporal Action Detection with Relational Queries paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the BasicTAD (112,3,96,R50-SlowOnly) model in the BasicTAD: an Astounding RGB-Only Baseline for Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
What metrics were used to measure the E2E-TAD (SlowFast R50+TadTR) model in the An Empirical Study of End-to-End Temporal Action Detection paper on the THUMOS’14 dataset?
Avg mAP (0.3:0.7), mAP IOU@0.1, mAP IOU@0.2, mAP IOU@0.3, mAP IOU@0.4, mAP IOU@0.5, mAP IOU@0.6, mAP IOU@0.7
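The temporal action detection rows above all report mAP at temporal-IoU (tIoU) thresholds: a predicted segment counts as a true positive when its 1-D IoU with an unmatched ground-truth segment meets the threshold, and AP is computed over score-ranked predictions. A minimal single-class sketch (helper names `tiou` and `average_precision` are illustrative; the benchmarks use the official ActivityNet/THUMOS evaluation code, which also handles classes and interpolation details):

```python
def tiou(seg_a, seg_b):
    """Temporal IoU of two (start, end) segments on a 1-D timeline."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, thresh):
    """AP at one tIoU threshold for one class.

    preds: list of (start, end, score); gts: list of (start, end).
    Greedily matches each prediction (highest score first) to the
    best-overlapping unmatched ground truth.
    """
    preds = sorted(preds, key=lambda p: -p[2])
    matched = set()
    tp = []
    for start, end, _score in preds:
        best, best_iou = None, thresh
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            iou = tiou((start, end), gt)
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp.append(1)
        else:
            tp.append(0)
    # AP = mean precision over true-positive ranks, normalized by #ground truths.
    ap, hits = 0.0, 0
    for rank, is_tp in enumerate(tp, 1):
        if is_tp:
            hits += 1
            ap += hits / rank
    return ap / len(gts) if gts else 0.0
```

An "Avg mAP (0.3:0.7)" entry then averages this quantity over classes and over the tIoU thresholds 0.3, 0.4, 0.5, 0.6, 0.7.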