| prompts | metrics_response |
|---|---|
What metrics were used to measure the VideoCoCa model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the TESTA (ViT-B/16) model in the TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the FrozenBiLM+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the Singularity-temporal model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the Text + Text (no Multimodal Pretext Training) model in the Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the All-in-one+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the VIOLET+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the E-SA model in the ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the E-MN model in the ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the E-VQA model in the ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the GPT-2 + CLIP-14 + CLIP-multilingual (Zero-Shot) model in the Composing Ensembles of Pre-trained Models via Iterative Consensus paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the GPT-2 + CLIP-32 (Zero-Shot) model in the Composing Ensembles of Pre-trained Models via Iterative Consensus paper on the ActivityNet-QA dataset? | Accuracy, Vocabulary Size, Accuracy (Amazon Mechanical Turk) |
What metrics were used to measure the LLaMA-VQA model in the Large Language Models are Temporal and Causal Reasoners for Video Question Answering paper on the VLEP dataset? | Accuracy |
What metrics were used to measure the LLaMA-VQA model in the Large Language Models are Temporal and Causal Reasoners for Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the SeViLA model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the SeViLA (0-shot) model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the CoVGT(PT) model in the Contrastive Video Question Answering via Video Graph Transformer paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the SeViT model in the Semi-Parametric Video-Grounded Text Generation paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the ViperGPT(0-shot) model in the ViperGPT: Visual Inference via Python Execution for Reasoning paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the CoVGT model in the Contrastive Video Question Answering via Video Graph Transformer paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the VFC model in the Verbs in Action: Improving verb understanding in video-language models paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the ATM model in the ATM: Action Temporality Modeling for Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the MIST model in the MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the VGT(PT) model in the Video Graph Transformer for Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the PAXION model in the Paxion: Patching Action Knowledge in Video-Language Foundation Models paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the VGT model in the Video Graph Transformer for Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the ATP model in the Revisiting the "Video" in Video-Language Understanding paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the P3D-G model in the (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the HQGA model in the Video as Conditional Graph Hierarchy for Multi-Granular Question Answering paper on the NExT-QA dataset? | Accuracy |
What metrics were used to measure the model in the On the hidden treasure of dialog in video question answering paper on the KnowIT VQA dataset? | Accuracy |
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the HBI model in the Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the EMCL-Net model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the Singularity-temporal model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the MSRVTT-QA dataset? | Accuracy |
What metrics were used to measure the ATP (1<-16) model in the Revisiting the "Video" in Video-Language Understanding paper on the MSR-VTT-MC dataset? | Accuracy |
What metrics were used to measure the Text + Text (no Multimodal Pretext Training) model in the Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the SeViLA model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the Hero w/ pre-training model in the HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the ATP model in the Revisiting the "Video" in Video-Language Understanding paper on the How2QA dataset? | Accuracy |
What metrics were used to measure the LLaMA-VQA model in the Large Language Models are Temporal and Causal Reasoners for Video Question Answering paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the SeViLA model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the MIST model in the MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the Temp[ATP] model in the Revisiting the "Video" in Video-Language Understanding paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the AnyMAL-70B (0-shot) model in the AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the All-in-one model in the All in One: Exploring Unified Video-Language Pre-training paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the SeViLA (0-shot) model in the Self-Chained Image-Language Model for Video Localization and Question Answering paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the Flamingo-9B (4-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the Flamingo-80B (4-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the Flamingo-9B (0-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the Flamingo-80B (0-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the STAR: Situated Reasoning dataset? | Average Accuracy |
What metrics were used to measure the LLaMA-VQA model in the Large Language Models are Temporal and Causal Reasoners for Video Question Answering paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the iPerceive (Chadha et al., 2020) model in the iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the Hero w/ pre-training model in the HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the STAGE (Lei et al., 2019) model in the TVQA+: Spatio-Temporal Grounding for Video Question Answering paper on the TVQA dataset? | Accuracy |
What metrics were used to measure the LLaMA-VQA model in the Large Language Models are Temporal and Causal Reasoners for Video Question Answering paper on the DramaQA dataset? | Accuracy |
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the LSMDC-FiB dataset? | Accuracy |
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the LSMDC-MC dataset? | Accuracy |
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the LSMDC-MC dataset? | Accuracy |
What metrics were used to measure the EHR-Graph Transformer (pre-trained) model in the Unsupervised Pre-Training on Patient Population Graphs for Patient-Level Predictions paper on the MIMIC-III dataset? | Accuracy (LOS>3 Days), Accuracy (LOS>7 Days) |
What metrics were used to measure the EHR-Graph Transformer model in the Unsupervised Pre-Training on Patient Population Graphs for Patient-Level Predictions paper on the MIMIC-III dataset? | Accuracy (LOS>3 Days), Accuracy (LOS>7 Days) |
What metrics were used to measure the Random Forests (RF) model in the MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III paper on the MIMIC-III dataset? | Accuracy (LOS>3 Days), Accuracy (LOS>7 Days) |
What metrics were used to measure the Logistic Regression (LR) model in the MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III paper on the MIMIC-III dataset? | Accuracy (LOS>3 Days), Accuracy (LOS>7 Days) |
What metrics were used to measure the GRU-D model in the MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III paper on the MIMIC-III dataset? | Accuracy (LOS>3 Days), Accuracy (LOS>7 Days) |
What metrics were used to measure the CORe model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the BioBERT Base model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the BERT Base model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the SRLA model in the Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning paper on the NASA C-MAPSS dataset? | Average Remaining Cycles |
What metrics were used to measure the CSTrack model in the Rethinking the competition between detection and ReID in Multi-Object Tracking paper on the HiEve dataset? | MOTA, IDF1 |
What metrics were used to measure the SGT model in the Detection Recovery in Online Multi-Object Tracking with Sparse Graph Tracker paper on the HiEve dataset? | MOTA, IDF1 |
What metrics were used to measure the FairMOT model in the FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking paper on the HiEve dataset? | MOTA, IDF1 |
What metrics were used to measure the JDE model in the Towards Real-Time Multi-Object Tracking paper on the HiEve dataset? | MOTA, IDF1 |
What metrics were used to measure the ByteTrack model in the Large Scale Real-World Multi-Person Tracking paper on the PersonPath22 dataset? | IDF1, MOTA |
What metrics were used to measure the FairMOT model in the Large Scale Real-World Multi-Person Tracking paper on the PersonPath22 dataset? | IDF1, MOTA |
What metrics were used to measure the SiamMOT model in the Large Scale Real-World Multi-Person Tracking paper on the PersonPath22 dataset? | IDF1, MOTA |
What metrics were used to measure the CenterTrack model in the Large Scale Real-World Multi-Person Tracking paper on the PersonPath22 dataset? | IDF1, MOTA |
What metrics were used to measure the PPTracking model in the PP-YOLOE: An evolved version of YOLO paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the ReMOT model in the ReMOTS: Self-Supervised Refining Multi-Object Tracking and Segmentation paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the SGT model in the Detection Recovery in Online Multi-Object Tracking with Sparse Graph Tracker paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the STGT model in the TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the FairMOT model in the FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the UniTrack model in the Do Different Tracking Tasks Require Different Appearance Models? paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the OUTrack_fm model in the Online Multi-Object Tracking with Unsupervised Re-Identification Learning and Occlusion Estimation paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the LMOT model in the LMOT: Efficient Light-Weight Detection and Tracking in Crowds paper on the MOT16 dataset? | MOTA, IDF1, IDs |
What metrics were used to measure the TraDeS model in the Track to Detect and Segment: An Online Multi-Object Tracker paper on the MOT16 dataset? | MOTA, IDF1, IDs |
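Below is a minimal sketch of loading and querying rows like the ones above with the Hugging Face `datasets` library. The repository ID `user/metrics-extraction-qa` is a placeholder, not this dataset's actual Hub ID; the column names `prompts` and `metrics_response` come from the table header above.

```python
# Minimal usage sketch, assuming the dataset is hosted on the Hub.
# "user/metrics-extraction-qa" is a hypothetical repository ID --
# replace it with the real one before running.
from datasets import load_dataset

ds = load_dataset("user/metrics-extraction-qa", split="train")

# Each row pairs a natural-language question about a (model, paper,
# benchmark dataset) triple with the metrics reported for it.
example = ds[0]
print(example["prompts"])
print(example["metrics_response"])

# Example query: keep only rows that concern a given benchmark.
activitynet_rows = ds.filter(lambda row: "ActivityNet-QA" in row["prompts"])
print(f"{len(activitynet_rows)} rows mention ActivityNet-QA")
```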