prompts | metrics_response |
|---|---|
What metrics were used to measure the LGD-3D RGB model in the Learning Spatio-Temporal Representation with Local and Global Diffusion paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the RGB-I3D (Imagenet+Kinetics pre-training) model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-RGB (Kinetics pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the VidTr-L model in the VidTr: Video Transformer Without Convolutions paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the ADL+ResNet+IDT model in the Contrastive Video Representation Learning via Adversarial Perturbations paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the RGB-I3D (Kinetics pre-training) model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Optical Flow Guided Feature model in the Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-TwoStream (Sports1M pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the TVNet+IDT model in the End-to-End Learning of Motion Representation for Video Understanding paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the STM Network+IDT model in the Spatiotemporal Multiplier Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Prob-Distill model in the Attention Distillation for Learning Video Representations paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the DMC-Net (I3D) model in the DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the TesNet (ImageNet pretrained) model in the Learning spatio-temporal representations with temporal squeeze pooling paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the HF-ECOLite (ImageNet+Kinetics pretrain) model in the Hierarchical Feature Aggregation Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the ARTNet w/ TSN model in the Appearance-and-Relation Networks for Video Classification paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the ST-ResNet + IDT model in the Spatiotemporal Residual Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-Flow (Sports1M pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Temporal Segment Networks model in the Temporal Segment Networks: Towards Good Practices for Deep Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the TS-LSTM model in the TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for Activity Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SVT (finetune) model in the Self-supervised Video Transformer paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-RGB (Sports1M pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the TDD + IDT model in the Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the VIMPAC model in the VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the S:VGG-16, T:VGG-16 (ImageNet pretrained) model in the Convolutional Two-Stream Network Fusion for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Dynamic Image Networks + IDT model in the Dynamic Image Networks for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the LTC model in the Long-term Temporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R-STAN-50 model in the R-STAN: Residual Spatial-Temporal Attention Network for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the DMC-Net (ResNet-18) model in the DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SUSiNet (multi, Kinetics pretrained) model in the SUSiNet: See, Understand and Summarize it paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Two-Stream (ImageNet pretrained) model in the Two-Stream Convolutional Networks for Action Recognition in Videos paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SVT (linear) model in the Self-supervised Video Transformer paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the ActionFlowNet model in the ActionFlowNet: Learning Motion Representation for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R-STAN-152 model in the R-STAN: Residual Spatial-Temporal Attention Network for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Res3D model in the ConvNet Architecture Search for Spatiotemporal Feature Learning paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R(2+1)D-18 (DistInit pretraining) model in the DistInit: Learning Video Representations Without a Single Labeled Video paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the JMRN model in the Pose and Joint-Aware Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the CD-UAR model in the Towards Universal Representation for Unseen Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the C3D model in the Learning Spatiotemporal Features with 3D Convolutional Networks paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D (VideoMoCo) model in the VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the 3D-ResNet-18 (VideoMoCo) model in the VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the LART (Hiera-H, K700 PT+FT) model in the On the Benefits of 3D Pose and Tracking for Human Action Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the Hiera-H (K700 PT+FT) model in the Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE V2-g model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the STAR/L model in the End-to-End Spatio-Temporal Action Localisation with Video Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain+finetune, ViT-H, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-H, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MaskFeat (Kinetics-600 pretrain, MViT-L) model in the Masked Feature Prediction for Self-Supervised Visual Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain+finetune, ViT-H, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K700 pretrain+finetune, ViT-L, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain+finetune, ViT-L, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain+finetune, ViT-L, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-L, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain, ViT-H, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K700 pretrain, ViT-L, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MeMViT-24 model in the MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViTv2-L (IN21k, K700) model in the MViTv2: Improved Multiscale Vision Transformers for Classification and Detection paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain, ViT-L, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain+finetune, ViT-B, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the HIT model in the Holistic Interaction Transformer Network for Action Detection paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain+finetune, ViT-B, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the ACAR-Net, SlowFast R-101 (Kinetics-700 pretraining) model in the Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-B, 16x4) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the Object Transformer model in the Towards Long-Form Video Understanding paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B-24, 32x3 (Kinetics-600 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B, 32x3 (Kinetics-600 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the SlowFast, 16x8 R101+NL (Kinetics-600 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B, 64x3 (Kinetics-400 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the SlowFast, 8x8 R101+NL (Kinetics-600 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B, 32x3 (Kinetics-400 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the VideoMAE (K400 pretrain, ViT-B, 16x4) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the ORViT MViT-B, 16x4 (K400 pretraining) model in the Object-Region Video Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B, 16x4 (Kinetics-600 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the MViT-B, 16x4 (Kinetics-400 pretraining) model in the Multiscale Vision Transformers paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the SlowFast, 8x8, R101 (Kinetics-400 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the SlowFast, 4x16, R50 (Kinetics-400 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.2 dataset? | mAP |
What metrics were used to measure the 2DCNN+TRN model in the Win-Fail Action Recognition paper on the Win-Fail Action Understanding dataset? | 2-Class Accuracy |
What metrics were used to measure the IF+MD+RGB-R (ResNet-18) model in the SCSampler: Sampling Salient Clips from Video for Efficient Action Recognition paper on the miniSports dataset? | Accuracy |
What metrics were used to measure the IF+MD+RGB-R (ShuffleNet-26) model in the SCSampler: Sampling Salient Clips from Video for Efficient Action Recognition paper on the miniSports dataset? | Accuracy |
What metrics were used to measure the Structured Keypoint Pooling model in the Unified Keypoint-based Action Recognition Framework via Structured Keypoint Pooling paper on the Skeleton-Mimetics dataset? | Accuracy |
What metrics were used to measure the Action Machine (RGB only) model in the Action Machine: Rethinking Action Recognition in Trimmed Videos paper on the UTD-MHAD dataset? | Accuracy |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the VideoMAE V2-g model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the ATM model in the What Can Simple Arithmetic Operations Do for Temporal Modeling? paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the TAdaFormer-L/14 model in the Temporally-Adaptive Models for Efficient Video Understanding paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the UniFormerV2-L model in the UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the UniFormer-B (IN-1K + Kinetics400) model in the UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the TAdaConvNeXtV2-B model in the Temporally-Adaptive Models for Efficient Video Understanding paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the TPS model in the Spatiotemporal Self-attention Modeling with Temporal Patch Shift for Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the MSMA (8+16 frames) model in the Multi-scale Motion-Aware Module for Video Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the UniFormer-B (IN-1K + Kinetics600) model in the UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the SIFA model in the Stand-Alone Inter-Frame Attention in Video Models paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the EAN ResNet50 (single clip, center crop, 8+16 ensemble, with sparse Transformer) model in the EAN: Event Adaptive Network for Enhanced Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the TCM (Ensemble) model in the Motion-driven Visual Tempo Learning for Video-based Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the BQNEn (ImageNet + K400 pretrained) model in the Busy-Quiet Video Disentangling for Video Classification paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the TDN ResNet101 (one clip, center crop, 8+16 ensemble, ImageNet pretrained, RGB only) model in the TDN: Temporal Difference Networks for Efficient Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, 2 clips) model in the Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the CT-Net Ensemble (R50, 8+12+16+24) model in the CT-Net: Channel Tensorization Network for Video Classification paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
What metrics were used to measure the MoDS (8+16 frames) model in the Action Recognition With Motion Diversification and Dynamic Selection paper on the Something-Something V1 dataset? | Top 1 Accuracy, Top 5 Accuracy, Param., GFLOPs |
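Both columns (`prompts` and `metrics_response`) are plain strings, so the rows can be loaded and inspected with the standard `datasets` API. A minimal sketch, assuming a hypothetical dataset ID (the real repository path is not shown above):

```python
# Minimal sketch of loading and inspecting rows like the ones above.
# The dataset ID below is a placeholder, not the real repository path.
from datasets import load_dataset

ds = load_dataset("user/metrics-qa", split="train")  # hypothetical ID

# Each row pairs a natural-language question ("prompts") with the
# leaderboard metric(s) it asks about ("metrics_response").
for row in ds.select(range(3)):
    print(row["prompts"])
    print("->", row["metrics_response"])
```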