prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
What metrics were used to measure the CD-UAR model in the Towards Universal Representation for Unseen Action Recognition paper on the UCF101 dataset?
3-fold Accuracy, Accuracy
What metrics were used to measure the SL model in the paper on the UCF101 dataset?
3-fold Accuracy, Accuracy
What metrics were used to measure the I3D + PoTion model in the PoTion: Pose MoTion Representation for Action Recognition paper on the UCF101 dataset?
3-fold Accuracy, Accuracy
What metrics were used to measure the R3D-18 model in the Federated Self-supervised Learning for Video Understanding paper on the UCF101 dataset?
3-fold Accuracy, Accuracy
What metrics were used to measure the MSQNet model in the Actor-agnostic Multi-label Action Recognition with Multi-modal Query paper on the THUMOS14 dataset?
Accuracy
What metrics were used to measure the MSQNet model in the Actor-agnostic Multi-label Action Recognition with Multi-modal Query paper on the Hockey dataset?
Accuracy
What metrics were used to measure the TSM+W3 - full res model in the Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention paper on the EPIC-KITCHENS-55 dataset?
Top-1 Accuracy
What metrics were used to measure the C3D-AVG model in the What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment paper on the MTL-AQA dataset?
Position Accuracy, Armstand Accuracy, Rotation Type Accuracy, No. of Somersaults Accuracy, No. of Twists Accuracy
What metrics were used to measure the MSQNet model in the Actor-agnostic Multi-label Action Recognition with Multi-modal Query paper on the Animal Kingdom dataset?
mAP
What metrics were used to measure the CARe model in the Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding paper on the Animal Kingdom dataset?
mAP
What metrics were used to measure the DeVTr model in the Data Efficient Video Transformer for Violence Detection paper on the Real Life Violence Situations dataset?
Accuracy
What metrics were used to measure the Temporal Fusion cnn+lstm model in the A Temporal Fusion Approach for Video Classification with Convolutional and LSTM Neural Networks Applied to Violence Detection paper on the Real Life Violence Situations dataset?
Accuracy
What metrics were used to measure the CNN+LSTM model in the Violence Recognition from Videos using Deep Learning Techniques paper on the Real Life Violence Situations dataset?
Accuracy
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-H, 16 frame) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VideoMAE V2-g model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-L, 16 frame) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the Hiera-L (no extra data) model in the Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TubeViT-L model in the Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VideoMAE (no extra data, ViT-L, 32x2) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MaskFeat (Kinetics600 pretrain, MViT-L) model in the Masked Feature Prediction for Self-Supervised Visual Pre-Training paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MAR (50% mask, ViT-L, 16x4) model in the MAR: Masked Autoencoders for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ATM model in the What Can Simple Arithmetic Operations Do for Temporal Modeling? paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MAWS (ViT-L) model in The effectiveness of MAE pre-pretraining for billion-scale pretraining paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VideoMAE (no extra data, ViT-L, 16frame) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MAR (75% mask, ViT-L, 16x4) model in the MAR: Masked Autoencoders for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-B, 16 frame) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TAdaFormer-L/14 model in the Temporally-Adaptive Models for Efficient Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MViTv2-L (IN-21K + Kinetics400 pretrain) model in the MViTv2: Improved Multiscale Vision Transformers for Classification and Detection paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the UniFormerV2-L model in the UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ST-Adapter (ViT-L, CLIP) model in the ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ZeroI2V ViT-L/14 model in the ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MViT-B (IN-21K + Kinetics400 pretrain) model in the MViTv2: Improved Multiscale Vision Transformers for Classification and Detection paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the OMNIVORE (Swin-B, IN-21K+ Kinetics400 pretrain) model in the Omnivore: A Single Model for Many Visual Modalities paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the BEVT (IN-1K + Kinetics400 pretrain) model in the BEVT: BERT Pretraining of Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the UniFormer-B (IN-1K + Kinetics400 pretrain) model in the UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TAdaConvNeXtV2-B model in the Temporally-Adaptive Models for Efficient Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MAR (50% mask, ViT-B, 16x4) model in the MAR: Masked Autoencoders for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MVD (Kinetics400 pretrain, ViT-S, 16 frame) model in the Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the CoVeR(JFT-3B) model in the Co-training Transformer with Videos and Images Improves Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VideoMAE (no extra data, ViT-B, 16frame) model in the VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ILA (ViT-L/14) model in the Implicit Temporal Modeling with Learnable Alignment for Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MorphMLP-B (IN-1K) model in the MorphMLP: An Efficient MLP-Like Backbone for Spatial-Temporal Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the CoVeR(JFT-300M) model in the Co-training Transformer with Videos and Images Improves Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TPS model in the Spatiotemporal Self-attention Modeling with Temporal Patch Shift for Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the SIFA model in the Stand-Alone Inter-Frame Attention in Video Models paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the Swin-B (IN-21K + Kinetics400 pretrain) model in the Video Swin Transformer paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TDN ResNet101 (one clip, three crop, 8+16 ensemble, ImageNet pretrained, RGB only) model in the TDN: Temporal Difference Networks for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MAR (75% mask, ViT-B, 16x4) model in the MAR: Masked Autoencoders for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ORViT Mformer-L (ORViT blocks) model in the Object-Region Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the UniFormer-S (IN-1K + Kinetics600 pretrain) model in the UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MML (ensemble) model in the Mutual Modality Learning for Video Action Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MViT-B-24, 32x3 model in the Multiscale Vision Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MTV-B model in the Multiview Transformers for Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MLP-3D model in the MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TDN ResNet101 (one clip, center crop, 8+16 ensemble, ImageNet pretrained, RGB only) model in the TDN: Temporal Difference Networks for Efficient Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MSMA (8+16frames) model in the Multi-scale Motion-Aware Module for Video Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the Mformer-L model in the Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VIMPAC model in the VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ORViT Mformer (ORViT blocks) model in the Object-Region Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MViT-B, 32x3(Kinetics600 pretrain) model in the Multiscale Vision Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the GC-TDN Ensemble (R50,8+16) model in the Group Contextualization for Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the CT-Net Ensemble (R50, 8+12+16+24) model in the CT-Net: Channel Tensorization Network for Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TCM (Ensemble) model in the Motion-driven Visual Tempo Learning for Video-based Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, 2 clips) model in the Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) model in the Relational Self-Attention: What's Missing in Attention for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the GTDNet model in the Global Temporal Difference Network for Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the SELFYNet-TSM-R50En (8+16 frames, ImageNet pretrained, a single clip) model in the Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VoV3D-L (32frames, Kinetics pretrained, single) model in the Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the PLAR model in the PLAR: Prompt Learning for Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the RSANet-R50 (8+16 frames, ImageNet pretrained, a single clip) model in the Relational Self-Attention: What's Missing in Attention for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the X-Vit (x16) model in the Space-time Mixing Attention for Video Transformer paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TAda2D-En (ResNet-50, 8+16 frames) model in the TAda! Temporally-Adaptive Convolutions for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the Mformer-HR model in the Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TAdaConvNeXt-T model in the TAda! Temporally-Adaptive Convolutions for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MoDS (8+16frames) model in the Action Recognition With Motion Diversification and Dynamic Selection paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the STPG (8+16frames) model in the Spatial-Temporal Pyramid Graph Reasoning for Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MML (single) model in the Mutual Modality Learning for Video Action Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ILA (ViT-B/16) model in the Implicit Temporal Modeling with Learnable Alignment for Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TSM (RGB + Flow) model in the TSM: Temporal Shift Module for Efficient Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MSNet-R50En (8+16 ensemble, ImageNet pretrained) model in the MotionSqueeze: Neural Motion Feature Learning for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the PAN ResNet101 (RGB only, no Flow) model in the PAN: Towards Fast Action Recognition via Learning Persistence of Appearance paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TSM+W3 (16 frames, RGB ResNet-50) model in the Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the Mformer model in the Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MVFNet-ResNet50 (center crop, 8+16 ensemble, ImageNet pretrained, RGB only) model in the MVFNet: Multi-View Fusion Network for Efficient Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MViT-B, 16x4 model in the Multiscale Vision Transformers paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the RSANet-R50 (16 frames, ImageNet pretrained, a single clip) model in the Relational Self-Attention: What's Missing in Attention for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VoV3D-L (32frames, from scratch, single) model in the Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the E3D-L model in the Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the SELFYNet-TSM-R50 (16 frames, ImageNet pretrained) model in the Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the TAda2D (ResNet-50, 16 frames) model in the TAda! Temporally-Adaptive Convolutions for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the ViViT-L/16x2 Fact. encoder model in the ViViT: A Video Vision Transformer paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VoV3D-M (32frames, Kinetics pretrained, single) model in the Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the bLVNet model in the More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the DirecFormer model in the DirecFormer: A Directed Attention in Transformer Approach to Robust Action Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the RSANet-R50 (8 frames, ImageNet pretrained, a single clip) model in the Relational Self-Attention: What's Missing in Attention for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the MSNet-R50 (16 frames, ImageNet pretrained) model in the MotionSqueeze: Neural Motion Feature Learning for Video Understanding paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the AK-Net model in the Action Keypoint Network for Efficient Video Recognition paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VoV3D-M (32frames, from scratch, single) model in the Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs
What metrics were used to measure the VoV3D-L (16frames, from scratch, single) model in the Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification paper on the Something-Something V2 dataset?
Top-1 Accuracy, Top-5 Accuracy, Parameters, GFLOPs