prompts | metrics_response |
|---|---|
What metrics were used to measure the Image-BART model in the ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis paper on the Conceptual Captions dataset? | FID |
What metrics were used to measure the VQ-GAN model in the Taming Transformers for High-Resolution Image Synthesis paper on the Conceptual Captions dataset? | FID |
What metrics were used to measure the Swinv2-Imagen model in the Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the Lafite model in the LAFITE: Towards Language-Free Training for Text-to-Image Generation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the Corgi model in the Shifted Diffusion for Text-to-image Generation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the Unite and Conquer model in the Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the TediGAN-B model in the Towards Open-World Text-Guided Face Image Generation and Manipulation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the TediGAN-A model in the TediGAN: Text-Guided Diverse Face Image Generation and Manipulation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the ControlGAN model in the Controllable Text-to-Image Generation paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the AttnGAN model in the AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the DM-GAN model in the DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the DF-GAN model in the DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis paper on the Multi-Modal-CelebA-HQ dataset? | FID, LPIPS, Acc, Real |
What metrics were used to measure the TLDM model in the Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the Swinv2-Imagen model in the Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the GALIP model in the GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the RAT-GAN model in the Recurrent Affine Transformation for Text-to-image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-F model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the Lafite model in the LAFITE: Towards Language-Free Training for Text-to-Image Generation paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-B model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-S model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the DM-GAN+CL model in the Improving Text-to-Image Synthesis Using Contrastive Learning paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the StackGAN-v2 model in the StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the AttnGAN+CL model in the Improving Text-to-Image Synthesis Using Contrastive Learning paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the StackGAN-v1 model in the StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the GAWWN model in the Learning What and Where to Draw paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the DF-GAN model in the DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the DM-GAN model in the DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the Attention-driven Generator (perceptual loss) model in the Controllable Text-to-Image Generation paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the MirrorGAN model in the MirrorGAN: Learning Text-to-image Generation by Redescription paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the AttnGAN model in the AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the StackGAN model in the StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the CUB dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-F model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-B model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the VQ-Diffusion-S model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the RAT-GAN model in the Recurrent Affine Transformation for Text-to-image Synthesis paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the StackGAN-v2 model in the StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the StackGAN-v1 model in the StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the StackGAN model in the StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks paper on the Oxford 102 Flowers dataset? | FID, Inception score |
What metrics were used to measure the LatteGAN model in the LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation paper on the GeNeVA (CoDraw) dataset? | F1-score, rsim |
What metrics were used to measure the GeNeVA-GAN model in the Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction paper on the GeNeVA (CoDraw) dataset? | F1-score, rsim |
What metrics were used to measure the LatteGAN model in the LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation paper on the GeNeVA (i-CLEVR) dataset? | F1-score, rsim |
What metrics were used to measure the GeNeVA-GAN model in the Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction paper on the GeNeVA (i-CLEVR) dataset? | F1-score, rsim |
What metrics were used to measure the Parti Finetuned model in the Scaling Autoregressive Models for Content-Rich Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the CM3Leon-7B model in the Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Re-Imagen (Finetuned) model in the Re-Imagen: Retrieval-Augmented Text-to-Image Generator paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the U-ViT-S/2-Deep model in the All are Worth Words: A ViT Backbone for Diffusion Models paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GLIGEN (fine-tuned, Detection + Caption data) model in the GLIGEN: Open-Set Grounded Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GLIGEN (fine-tuned, Detection data only) model in the GLIGEN: Open-Set Grounded Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the U-ViT-S/2 model in the All are Worth Words: A ViT Backbone for Diffusion Models paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the TLDM model in the Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GLIGEN (fine-tuned, Grounding data) model in the GLIGEN: Open-Set Grounded Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the RAPHAEL (zero-shot) model in the RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the ERNIE-ViLG 2.0 (zero-shot) model in the ERNIE-ViLG 2.0: Improving Text-to-Image Diffusion Model with Knowledge-Enhanced Mixture-of-Denoising-Experts paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Re-Imagen model in the Re-Imagen: Retrieval-Augmented Text-to-Image Generator paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the eDiff-I (zero-shot) model in the eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Swinv2-Imagen model in the Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Parti model in the Scaling Autoregressive Models for Content-Rich Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Imagen (zero-shot) model in the Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GigaGAN (Zero-shot, 64x64) model in the Scaling up GANs for Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the StyleGAN-T (Zero-shot, 64x64) model in the StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Make-a-Scene (unfiltered) model in the Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Muse-3B (zero-shot) model in the Muse: Text-To-Image Generation via Masked Generative Transformers paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Kandinsky model in the Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Lafite model in the LAFITE: Towards Language-Free Training for Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the simple diffusion (U-ViT) model in the simple diffusion: End-to-end diffusion for high resolution images paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GigaGAN (Zero-shot, 256x256) model in the Scaling up GANs for Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the XMC-GAN (256 x 256) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the XMC-GAN model in the Cross-Modal Contrastive Learning for Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DALL-E 2 model in the Hierarchical Text-Conditional Image Generation with CLIP Latents paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Corgi-Semi model in the Shifted Diffusion for Text-to-image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Corgi model in the Shifted Diffusion for Text-to-image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the TR0N (StyleGAN-XL, LAION2BCLIP, BLIP-2, zero-shot) model in the TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Make-a-Scene (filtered) model in the Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GLIDE (zero-shot) model in the GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the KNN-Diffusion model in the KNN-Diffusion: Image Generation via Large-Scale Retrieval paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the GALIP (CC12m) model in the GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Latent Diffusion (LDM-KL-8-G) model in the High-Resolution Image Synthesis with Latent Diffusion Models paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Stable Diffusion model in the Retrieval-Augmented Multimodal Language Modeling paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the NÜWA (256 x 256) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the VQ-Diffusion-F model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the StyleGAN-T (Zero-shot, 256x256) model in the StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the RAT-GAN model in the Recurrent Affine Transformation for Text-to-image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the ERNIE-ViLG model in the ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the RA-CM3 (2.7B) model in the Retrieval-Augmented Multimodal Language Modeling paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the CogView2(6B, Finetuned) model in the CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DALL-E model in the Zero-Shot Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the VQ-Diffusion-B model in the Vector Quantized Diffusion Model for Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DM-GAN+CL model in the Improving Text-to-Image Synthesis Using Contrastive Learning paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the FuseDream (few-shot, k=5) model in the FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the FuseDream (k=5, 256) model in the FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the FuseDream (k=10, 256) model in the FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the AttnGAN+CL model in the Improving Text-to-Image Synthesis Using Contrastive Learning paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the CogView2(6B) model in the CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the OP-GAN model in the Semantic Object Accuracy for Generative Text-to-Image Synthesis paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DM-GAN (256 x 256) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the Lafite (zero-shot) model in the LAFITE: Towards Language-Free Training for Text-to-Image Generation paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the CogView model in the CogView: Mastering Text-to-Image Generation via Transformers paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the CogView (256 x 256) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DALL-E (256 x 256) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
What metrics were used to measure the DALL-E (12B) model in the Retrieval-Augmented Multimodal Language Modeling paper on the COCO dataset? | FID, Inception score, FID-1, FID-2, FID-4, FID-8, SOA-C |
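Each row above is a pipe-delimited pair: a natural-language prompt in the first column and a comma-separated metric list in the second. A minimal sketch of parsing such rows into structured records (the `parse_row` helper is illustrative and not part of the dataset itself):

```python
def parse_row(line: str) -> tuple[str, list[str]]:
    """Split a 'prompt | metrics |' table row into the prompt and a metric list."""
    # Drop surrounding whitespace and the optional trailing pipe, then split cells.
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    prompt, metrics = cells[0], cells[1]
    return prompt, [m.strip() for m in metrics.split(",")]

row = ("What metrics were used to measure the VQ-GAN model in the "
       "Taming Transformers for High-Resolution Image Synthesis paper "
       "on the Conceptual Captions dataset? | FID |")
prompt, metrics = parse_row(row)
print(metrics)  # ['FID']
```

This assumes prompts never contain a literal `|`, which holds for every row shown here; a more defensive parser would split only on the last pipe-delimited cell.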