Columns: prompts (string, 81–413 characters), metrics_response (string, 0–371 characters)
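Nearly every response below lists recall-at-K together with Median Rank and Mean Rank. For reference, the following is a minimal illustrative sketch (assumed conventional definitions, not code from any of the cited papers) of how these text-to-video retrieval metrics are typically computed from a text-video similarity matrix; the video-to-text direction uses the transposed matrix.

# Minimal sketch, assuming a similarity matrix `sims` of shape (num_texts, num_videos)
# in which the ground-truth video for text query i is video i (the usual 1:1 pairing).
import numpy as np

def text_to_video_metrics(sims: np.ndarray) -> dict:
    order = np.argsort(-sims, axis=1)            # video indices sorted by descending score
    gt = np.arange(sims.shape[0])[:, None]       # ground-truth video index per query
    ranks = np.argmax(order == gt, axis=1) + 1   # 1-based rank of the correct video
    return {
        "R@1": 100.0 * float(np.mean(ranks <= 1)),
        "R@5": 100.0 * float(np.mean(ranks <= 5)),
        "R@10": 100.0 * float(np.mean(ranks <= 10)),
        "Median Rank": float(np.median(ranks)),
        "Mean Rank": float(np.mean(ranks)),
    }

# Video-to-text metrics are obtained the same way from the transposed matrix: text_to_video_metrics(sims.T)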
What metrics were used to measure the VLM model in the VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the VideoCLIP (zero-shot) model in the VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the VideoCoCa (zero-shot) model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the COOT model in the COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the Text-Video Embedding model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the RoME model in the RoME: Role-aware Mixture-of-Expert Transformer for Text-to-Video Retrieval paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the Satar et al. model in the Semantic Role Aware Correlation Transformer for Text to Video Retrieval paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the HGLMM FV CCA model in the Associating Neural Word Embeddings With Deep Image Representations Using Fisher Vectors paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the Hero w/ pre-training model in the HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training paper on the TVR dataset?
R@1, R@10, R@100
What metrics were used to measure the XML (Lei et al., 2020) model in the TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval paper on the TVR dataset?
R@1, R@10, R@100
What metrics were used to measure the MDMMT-2 model in the MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization paper on the TGIF dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the TGIF dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the HunYuan_tvr (huge) model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP-ViP model in the CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the HunYuan_tvr model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the STAN model in the Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MDMMT-2 model in the MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the X-CLIP model in the X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the EMCL-Net++ model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the X-Pool model in the X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the DiffusionRet model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CenterCLIP (ViT-B/16) model in the CenterCLIP: Token Clustering for Efficient Text-Video Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the EMCL-Net model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the QB-Norm+CLIP4Clip model in the Cross Modal Retrieval with Querybank Normalisation paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MDMMT model in the MDMMT: Multidomain Multimodal Transformer for Video Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the HD-VILA model in the Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the FROZEN model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Ours model in the Video and Text Matching with Conditioned Embeddings paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MMT-Pretrained model in the Multi-modal Transformer for Video Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MMT model in the Multi-modal Transformer for Video Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP model in the A Straightforward Framework For Video Retrieval Using CLIP paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MoEE model in the Learning a Text-Video Embedding from Incomplete and Heterogeneous Data paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the JSFusion model in the A Joint Sequence Fusion Model for Video Question Answering and Retrieval paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Large-Scale Discriminative Clustering model in the Learning from Video and Text via Large-Scale Discriminative Clustering paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Text-Video Embedding model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CT-SAN model in the End-to-end Concept Word Detection for Video Captioning, Retrieval, and Question Answering paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the EMCL-Net (Ours)++ model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the LSMDC dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the PO Loss model in the Rudder: A Cross Lingual Video and Text Retrieval Dataset paper on the RUDDER dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VLAB model in the VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Aurora (ours, r=64) model in the Parameter-efficient Tuning of Large-scale Multimodal Foundation Model paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the TEFAL model in the Audio-Enhanced Text-to-Video Retrieval using Text-Conditioned Feature Alignment paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the UCoFiA model in the Unified Coarse-to-Fine Alignment for Video-Text Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the OmniVL model in the OmniVL: One Foundation Model for Image-Language and Video-Language Tasks paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP4Clip-seqTransf model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the All-in-one + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the HD-VILA model in the Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VideoCoCa (zero-shot) model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MDMMT-2 model in the MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VIOLET + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP2TV model in the CLIP2TV: Align, Match and Distill for Video-Text Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the FROZEN model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the COTS model in the COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CoCa (zero-shot) model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP2Video model in the CLIP2Video: Mastering Video-Text Retrieval via Image CLIP paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the UniVL + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Ours model in the Video and Text Matching with Conditioned Embeddings paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the TACo model in the TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the MDMMT model in the MDMMT: Multidomain Multimodal Transformer for Video Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the CLIP model in the A Straightforward Framework For Video Retrieval Using CLIP paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the UniVL model in the UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Text-Video Embedding model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the RoME model in the RoME: Role-aware Mixture-of-Expert Transformer for Text-to-Video Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the JSFusion model in the A Joint Sequence Fusion Model for Video Question Answering and Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the JEMC model in the Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the Kaufman model in the Temporal Tessellation: A Unified Approach for Video Analysis paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the C+LSTM+SA+FC7 model in the Learning Language-Visual Embedding for Movie Understanding with Natural-Language paper on the MSR-VTT dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the Unmasked Teacher model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the Cap4Video model in the Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the TS2-Net model in the TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the QB-Norm+CLIP2Video model in the Cross Modal Retrieval with Querybank Normalisation paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the CLIP2Video model in the CLIP2Video: Mastering Video-Text Retrieval via Image CLIP paper on the VATEX dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@10
What metrics were used to measure the VVS3840 model in the VVS: Video-to-Video Retrieval with Irrelevant Frame Suppression paper on the FIVR-200K dataset?
mAP
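FIVR-200K is the only benchmark above scored with mean Average Precision rather than recall at fixed cut-offs. As a rough reference, here is a minimal sketch of mAP over ranked retrieval results (assumed conventional definition; not the evaluation code released with the VVS paper).

# Minimal illustrative sketch of mean Average Precision (mAP). `relevance` gives, for one
# query, the 0/1 relevance of every database video in the order the system ranked them.
import numpy as np

def average_precision(relevance: np.ndarray) -> float:
    hits = np.cumsum(relevance)                               # running count of relevant items
    precision_at_k = hits / (np.arange(len(relevance)) + 1)   # precision at each rank position
    n_relevant = relevance.sum()
    return float((precision_at_k * relevance).sum() / n_relevant) if n_relevant else 0.0

def mean_average_precision(per_query_relevance: list) -> float:
    return float(np.mean([average_precision(np.asarray(r)) for r in per_query_relevance]))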
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the SSv2-template retrieval dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the SSv2-template retrieval dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10
What metrics were used to measure the Singularity-temporal model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the SSv2-template retrieval dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10
What metrics were used to measure the TESTA (ViT-B/16) model in the TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding paper on the Condensed Movies dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10
What metrics were used to measure the VINDLU model in the VindLU: A Recipe for Effective Video-and-Language Pretraining paper on the Condensed Movies dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10
What metrics were used to measure the LF-VILA model in the Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning paper on the Condensed Movies dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10