prompts (string, 81–413 chars) | metrics_response (string, 0–371 chars) |
|---|---|
What metrics were used to measure the HunYuan_tvr (huge) model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP-ViP model in the CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DMAE (ViT-B/16) model in the Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HunYuan_tvr model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MuLTI model in the MuLTI: Efficient Video-and-Language Understanding with MultiWay-Sampler and Multiple Choice Modeling paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the STAN model in the Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the TS2-Net model in the TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DRL model in the Disentangled Representation Learning for Text-Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP2TV model in the CLIP2TV: Align, Match and Distill for Video-Text Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the EMCL-Net++ model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Cap4Video model in the Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the SuMA (ViT-B/16) model in the Video-Text Retrieval by Supervised Sparse Multi-Grained Learning paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X-CLIP model in the X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet+QB-Norm model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HBI model in the Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CenterCLIP (ViT-B/16) model in the CenterCLIP: Token Clustering for Efficient Text-Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the QB-Norm+CLIP2Video model in the Cross Modal Retrieval with Querybank Normalisation paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X-Pool model in the X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the EMCL-Net model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP2Video model in the CLIP2Video: Mastering Video-Text Retrieval via Image CLIP paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the All-in-one + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MDMMT model in the MDMMT: Multidomain Multimodal Transformer for Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MAC model in the Masked Contrastive Pre-Training for Efficient Video-Text Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the All-in-one-B model in the All in One: Exploring Unified Video-Language Pre-training paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the BridgeFormer model in the Bridging Video-text Retrieval with Multiple Choice Questions paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the COTS model in the COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VIOLET + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP model in the A Straightforward Framework For Video Retrieval Using CLIP paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the UniVL + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the FROZEN model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VideoCLIP model in the VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the TACo model in the TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VLM model in the VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MMT-Pretrained model in the Multi-modal Transformer for Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the BridgeFormer (Zero-shot) model in the Bridging Video-text Retrieval with Multiple Choice Questions paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MMT model in the Multi-modal Transformer for Video Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HT-Pretrained model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HT model in the HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the JSFusion model in the A Joint Sequence Fusion Model for Video Question Answering and Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Socratic Models model in the Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language paper on the MSR-VTT-1kA dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the SSv2-label retrieval dataset? | text-to-video R@1, text-to-video R@10, text-to-video R@5 |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the SSv2-label retrieval dataset? | text-to-video R@1, text-to-video R@10, text-to-video R@5 |
What metrics were used to measure the Singularity-temporal model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the SSv2-label retrieval dataset? | text-to-video R@1, text-to-video R@10, text-to-video R@5 |
What metrics were used to measure the X-CLIP (Cross-Lingual) model in the MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian paper on the MSVD-Indonesian dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HunYuan_tvr (huge) model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HunYuan_tvr model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VLAB model in the VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MDMMT-2 model in the MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Cap4Video model in the Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CenterCLIP (ViT-B/16) model in the CenterCLIP: Token Clustering for Efficient Text-Video Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X-CLIP model in the X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DMAE (ViT-B/32) model in the Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the QB-Norm+CLIP2Video model in the Cross Modal Retrieval with Querybank Normalisation paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet+QB-Norm model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X-Pool model in the X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the LAFF model in the Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP model in the A Straightforward Framework For Video Retrieval Using CLIP paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the FROZEN model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the SSML model in the Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the MSVD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank, text-to-video R@50, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the CLIP-ViP model in the CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the HunYuan_tvr model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the TESTA (ViT-B/16) model in the TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the DMAE (ViT-B/32) model in the Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the EMCL-Net++ model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the DiffusionRet+QB-Norm model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the CenterCLIP (ViT-B/16) model in the CenterCLIP: Token Clustering for Efficient Text-Video Retrieval paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the X-CLIP model in the X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the DiffusionRet model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the HBI model in the Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the EMCL-Net model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the TACo model in the TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the MMT-Pretrained model in the Multi-modal Transformer for Video Retrieval paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the HD-VILA model in the Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the Ours model in the Video and Text Matching with Conditioned Embeddings paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the MMT model in the Multi-modal Transformer for Video Retrieval paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
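The recall-at-K and rank metrics named in the metrics_response column (R@1/R@5/R@10/R@50, Median Rank, Mean Rank) are the standard text-video retrieval measures. As a minimal sketch of how these values are conventionally computed, the NumPy snippet below derives them from a query-by-candidate similarity matrix, assuming diagonal ground truth (caption i matches video i); it is an illustrative implementation, not the scoring code used by any of the papers above.

```python
import numpy as np

def retrieval_metrics(sim):
    """R@K, Median Rank, and Mean Rank from a (queries x candidates)
    similarity matrix with diagonal ground truth."""
    order = np.argsort(-sim, axis=1)                  # best candidate first
    gt = np.arange(sim.shape[0])[:, None]             # ground-truth index per query
    ranks = np.argmax(order == gt, axis=1) + 1        # 1-based rank of the match
    return {
        "R@1": float(np.mean(ranks <= 1) * 100),
        "R@5": float(np.mean(ranks <= 5) * 100),
        "R@10": float(np.mean(ranks <= 10) * 100),
        "R@50": float(np.mean(ranks <= 50) * 100),
        "MedR": float(np.median(ranks)),
        "MeanR": float(np.mean(ranks)),
    }

# Text-to-video metrics use a (num_texts x num_videos) matrix;
# transpose it to get the video-to-text direction.
sim = np.random.randn(1000, 1000)
print(retrieval_metrics(sim))
```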