| prompts | metrics_response |
|---|---|
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the PTP-BLIP (14M) model in the Position-guided Text Prompt for Vision-Language Pre-training paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the OmniVL (14M) model in the OmniVL: One Foundation Model for Image-Language and Video-Language Tasks paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the VSE-Gradient model in the Dissecting Deep Metric Learning Losses for Image-Text Retrieval paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the Aurora (ours, r=128) model in the Parameter-efficient Tuning of Large-scale Multimodal Foundation Model paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ALBEF model in the Align before Fuse: Vision and Language Representation Learning with Momentum Distillation paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ERNIE-ViL 2.0 model in the ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the TCL model in the Vision-Language Pre-Training with Triple Contrastive Learning paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the Oscar model in the Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the METER model in the An Empirical Study of Training End-to-End Vision-and-Language Transformers paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ViSTA model in the ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ALADIN model in the ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the VisualSparta model in the VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the RCAR model in the Plug-and-Play Regulators for Image-Text Matching paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the NAPReg model in the NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the ViLT-B/32 model in the ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the SGRAF model in the Similarity Reasoning and Filtration for Image-Text Matching paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the LILE model in the LILE: Look In-Depth before Looking Elsewhere -- A Dual Attention Network using Transformers for Cross-Modal Information Retrieval in Histopathology Archives paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the VSRN model in the Visual Semantic Reasoning for Image-Text Matching paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the IMRAM model in the IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the SCAN model in the Stacked Cross Attention for Image-Text Matching paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the SCO (ResNet) model in the Learning Semantic Concepts and Order for Image and Sentence Matching paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the PVSE model in the Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the Dual-Path (ResNet) model in the Deep Visual-Semantic Alignments for Generating Image Descriptions paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the MaMMUT (ours) model in the MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks paper on the COCO 2014 dataset? | Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10 |
What metrics were used to measure the AMAN model in the Adversarial Modality Alignment Network for Cross-Modal Molecule Retrieval paper on the ChEBI-20 dataset? | Hits@1, Hits@10, Mean Rank, Test MRR |
What metrics were used to measure the All-Ensemble model in the Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries paper on the ChEBI-20 dataset? | Hits@1, Hits@10, Mean Rank, Test MRR |
What metrics were used to measure the MLP1 model in the Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries paper on the ChEBI-20 dataset? | Hits@1, Hits@10, Mean Rank, Test MRR |
What metrics were used to measure the GCN2 model in the Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries paper on the ChEBI-20 dataset? | Hits@1, Hits@10, Mean Rank, Test MRR |
What metrics were used to measure the GeoCLAP model in the Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping paper on the SoundingEarth dataset? | Median Rank, Image-to-sound R@100, Sound-to-image R@100 |
What metrics were used to measure the ResNet-18 model in the Self-supervised Audiovisual Representation Learning for Remote Sensing Data paper on the SoundingEarth dataset? | Median Rank, Image-to-sound R@100, Sound-to-image R@100 |
What metrics were used to measure the Dual Path model in the Dual-Path Convolutional Image-Text Embeddings with Instance Loss paper on the CUHK-PEDES dataset? | Text-to-image MedR |
What metrics were used to measure the CoVR-BLIP model in the CoVR: Learning Composed Video Retrieval from Web Video Captions paper on the CIRR dataset? | (Recall@5+Recall_subset@1)/2 |
What metrics were used to measure the DNABERT-2-117M model in the DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome paper on the GUE dataset? | MCC |
What metrics were used to measure the DNABERT-2-117M model in the DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome paper on the GUE dataset? | Avg F1 |
What metrics were used to measure the MedVInT model in the PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering paper on the PMC-VQA dataset? | BLEU-1 |
What metrics were used to measure the BLIP-2 model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the PMC-VQA dataset? | BLEU-1 |
What metrics were used to measure the Open-Flamingo model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the PMC-VQA dataset? | BLEU-1 |
What metrics were used to measure the BT-Adapter model in the One For All: Video Conversation is Feasible Without Video Instruction Tuning paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the Video-ChatGPT model in the Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the Video Chat model in the VideoChat: Chat-Centric Video Understanding paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the BT-Adapter (zero-shot) model in the One For All: Video Conversation is Feasible Without Video Instruction Tuning paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the LLaMA Adapter model in the LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the Video LLaMA model in the Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding paper on the VideoInstruct dataset? | gpt-score |
What metrics were used to measure the BT-Adapter model in the One For All: Video Conversation is Feasible Without Video Instruction Tuning paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the BT-Adapter (zero-shot) model in the One For All: Video Conversation is Feasible Without Video Instruction Tuning paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the Video-ChatGPT model in the Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the Video Chat model in the VideoChat: Chat-Centric Video Understanding paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the LLaMA Adapter model in the LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the Video LLaMA model in the Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding paper on the VideoInstruct dataset? | mean, Correctness of Information, Detail Orientation, Contextual Understanding, Temporal Understanding, Consistency |
What metrics were used to measure the BigGAN + gSR model in the Improving GANs for Long-Tailed Data through Group Spectral Regularization paper on the CIFAR-10 LT dataset? | FID |
What metrics were used to measure the StyleGAN2 + ADA model in the Training Generative Adversarial Networks with Limited Data paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the ReACGAN + DiffAug model in the Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the BigGAN + DiffAug model in the Large Scale GAN Training for High Fidelity Natural Image Synthesis paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the StyleGAN2 model in the Analyzing and Improving the Image Quality of StyleGAN paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the BigGAN + CR model in the Consistency Regularization for Generative Adversarial Networks paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the Projected GAN model in the Projected GANs Converge Faster paper on the ArtBench-10 (32x32) dataset? | FID |
What metrics were used to measure the StyleGAN2 + NoisyTwins model in the NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs paper on the ImageNet-LT dataset? | FID |
What metrics were used to measure the ADM-G + EDS (ED-DPM, classifier_scale=0.75) model in the Entropy-driven Sampling and Training Scheme for Conditional Diffusion Generation paper on the ImageNet 256x256 dataset? | FID, Inception score |
What metrics were used to measure the ADM-G + EDS + ECT (ED-DPM, classifier_scale=1.0) model in the Entropy-driven Sampling and Training Scheme for Conditional Diffusion Generation paper on the ImageNet 256x256 dataset? | FID, Inception score |
What metrics were used to measure the ADM-G model in the Diffusion Models Beat GANs on Image Synthesis paper on the ImageNet 256x256 dataset? | FID, Inception score |
What metrics were used to measure the BigGAN+ [Brock et al.] (chx96) model in the Instance-Conditioned GAN paper on the ImageNet 256x256 dataset? | FID, Inception score |
What metrics were used to measure the IC-GAN (chx96) + DA model in the Instance-Conditioned GAN paper on the ImageNet 256x256 dataset? | FID, Inception score |
What metrics were used to measure the FQ-GAN model in the Feature Quantization Improves GAN Training paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the TAC-GAN model in the Twin Auxiliary Classifiers GAN paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the ADC-GAN model in the Conditional GANs with Auxiliary Discriminative Classifier paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the aw-BigGAN model in the Adaptive Weighted Discriminator for Training Generative Adversarial Networks paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the aw-SN-GAN model in the Adaptive Weighted Discriminator for Training Generative Adversarial Networks paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the MHingeGAN model in the cGANs with Multi-Hinge Loss paper on the CIFAR-100 dataset? | FID, Inception Score, Intra-FID |
What metrics were used to measure the U-Net GAN model in the A U-Net Based Discriminator for Generative Adversarial Networks paper on the COCO-Animals dataset? | FID, IS |
What metrics were used to measure the BigGAN model in the A U-Net Based Discriminator for Generative Adversarial Networks paper on the COCO-Animals dataset? | FID, IS |
What metrics were used to measure the EluCD_DDPM model in the Elucidating The Design Space of Classifier-Guided Diffusion Generation paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the ADM-G + EDS (ED-DPM, classifier_scale=0.4) model in the Entropy-driven Sampling and Training Scheme for Conditional Diffusion Generation paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the ADM-G + EDS + ECT (ED-DPM, classifier_scale=0.6) model in the Entropy-driven Sampling and Training Scheme for Conditional Diffusion Generation paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the simple diffusion (U-Net) model in the simple diffusion: End-to-end diffusion for high resolution images paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the ADM-G (classifier_scale=0.5) model in the Diffusion Models Beat GANs on Image Synthesis paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the simple diffusion (U-ViT, L) model in the simple diffusion: End-to-end diffusion for high resolution images paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the BigGAN-deep model in the Large Scale GAN Training for High Fidelity Natural Image Synthesis paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the Omni-INR-GAN model in the Omni-GAN: On the Secrets of cGANs and Beyond paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the CR-BigGAN model in the Consistency Regularization for Generative Adversarial Networks paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the S3 GAN model in the High-Fidelity Image Generation With Fewer Labels paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the ADC-GAN model in the Conditional GANs with Auxiliary Discriminative Classifier paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the ReACGAN model in the Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the Omni-GAN model in the Omni-GAN: On the Secrets of cGANs and Beyond paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the BigGAN model in the Large Scale GAN Training for High Fidelity Natural Image Synthesis paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the IC-GAN + DA model in the Instance-Conditioned GAN paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the BigGAN + instance selection model in the Instance Selection for GANs paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the FQ-GAN model in the Feature Quantization Improves GAN Training paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the HamGAN model in the Is Attention Better Than Matrix Decomposition? paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the Your Local GAN model in the Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the SAGAN model in the Self-Attention Generative Adversarial Networks paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the Projection Discriminator model in the cGANs with Projection Discriminator paper on the ImageNet 128x128 dataset? | FID, Inception score |
What metrics were used to measure the AC-GAN model in the Conditional Image Synthesis With Auxiliary Classifier GANs paper on the ImageNet 128x128 dataset? | FID, Inception score |
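Most rows above report cross-modal retrieval metrics: Recall@K (R@K), Hits@K, MRR, and mean or median rank. All of these can be derived from the rank of each query's ground-truth match in a query-by-gallery similarity matrix. The following is a minimal sketch of that computation; the function name `retrieval_metrics`, the diagonal ground-truth convention, and the toy data are illustrative assumptions, not evaluation code from any cited paper. (COCO image-to-text evaluation typically counts a hit if any of an image's five reference captions is retrieved; this sketch assumes a single match per query for brevity.)

```python
import numpy as np

def retrieval_metrics(sim, ks=(1, 5, 10)):
    """Compute R@K, mean rank, median rank, and MRR from a
    query-by-gallery similarity matrix.

    sim[i, j] is the similarity between query i and gallery item j;
    the ground-truth match for query i is assumed to be gallery item i.
    """
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)                  # gallery indices, best first
    # Position at which the ground-truth item appears (0 = ranked first).
    ranks = np.argmax(order == np.arange(n)[:, None], axis=1)
    out = {f"R@{k}": 100.0 * np.mean(ranks < k) for k in ks}
    out["MeanR"] = float(ranks.mean() + 1)            # 1-indexed mean rank
    out["MedR"] = float(np.median(ranks) + 1)         # 1-indexed median rank
    out["MRR"] = float(np.mean(1.0 / (ranks + 1)))    # mean reciprocal rank
    return out

# Toy example: 4 queries, correct match on the diagonal.
rng = np.random.default_rng(0)
sim = rng.normal(size=(4, 4)) + 3 * np.eye(4)         # boost correct pairs
print(retrieval_metrics(sim))                         # text-to-image direction
print(retrieval_metrics(sim.T))                       # image-to-text direction
```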
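The DNABERT-2 rows on the GUE benchmark report MCC (Matthews correlation coefficient), a single-number summary of a binary confusion matrix that stays informative under class imbalance. A minimal sketch of the standard formula follows; the counts in the usage line are made-up toy values.

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from binary confusion counts.

    Returns a value in [-1, 1]; 0 by convention when any marginal is empty.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=90, fp=10, tn=85, fn=15))  # ~0.751 on a balanced toy split
```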
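The generative-model rows (BigGAN, StyleGAN2, ADM-G, and so on) are evaluated with FID and Inception score. FID is the Fréchet distance between Gaussians fitted to InceptionV3 pool features of real and generated images. Below is a minimal sketch of the distance itself, assuming the (N, D) feature arrays have already been extracted elsewhere; reported numbers additionally depend on the exact Inception weights and sample counts, which this sketch does not reproduce.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two feature sets.

    Both arguments are (N, D) arrays of InceptionV3 pool features.
    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}).
    """
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c_r @ c_g)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny
        covmean = covmean.real            # imaginary parts; drop them
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(c_r + c_g - 2.0 * covmean))
```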