| prompts | metrics_response |
|---|---|
What metrics were used to measure the pix2pixHD model in the High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the Pix2PixHD-AUG model in the Improving Augmentation and Evaluation Schemes for Semantic Image Synthesis paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the CRN model in the Photographic Image Synthesis with Cascaded Refinement Networks paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the SIMS model in the Semi-parametric Image Synthesis paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the USIS model in the USIS: Unsupervised Semantic Image Synthesis paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the USIS-Wavelet model in the Wavelet-based Unsupervised Label-to-Image Translation paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the SB-GAN model in the Semantic Bottleneck Scene Generation paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the pix2pix model in the Image-to-Image Translation with Conditional Adversarial Networks paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the CycleGAN model in the Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the CoGAN model in the Coupled Generative Adversarial Networks paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the SimGAN model in the Learning from Simulated and Unsupervised Images through Adversarial Training paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the BiGAN model in the Adversarially Learned Inference paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the INADE model in the Diverse Semantic Image Synthesis via Probability Distribution Modeling paper on the Cityscapes Labels-to-Photo dataset? | mIoU, FID, Accuracy, Class IOU, Per-class Accuracy, Per-pixel Accuracy, LPIPS |
What metrics were used to measure the U-GAT-IT model in the U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation paper on the portrait2photo dataset? | Kernel Inception Distance |
What metrics were used to measure the GNR model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the selfie2anime dataset? | DFID, FID, LPIPS |
What metrics were used to measure the CouncilGAN model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the selfie2anime dataset? | DFID, FID, LPIPS |
What metrics were used to measure the StarGANv2 model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the selfie2anime dataset? | DFID, FID, LPIPS |
What metrics were used to measure the DRIT++ model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the selfie2anime dataset? | DFID, FID, LPIPS |
What metrics were used to measure the CyCADA model in the CyCADA: Cycle-Consistent Adversarial Domain Adaptation paper on the SYNTHIA Fall-to-Winter dataset? | mIoU, Per-pixel Accuracy, fwIOU |
What metrics were used to measure the FCNs in the wild model in the FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation paper on the SYNTHIA Fall-to-Winter dataset? | mIoU, Per-pixel Accuracy, fwIOU |
What metrics were used to measure the SB-GAN model in the Semantic Bottleneck Scene Generation paper on the ADE-Indoor Labels-to-Photo dataset? | FID |
What metrics were used to measure the EGSDE model in the EGSDE: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations paper on the AFHQ (Cat to Dog) dataset? | FID |
What metrics were used to measure the ResViT model in the ResViT: Residual vision transformers for multi-modal medical image synthesis paper on the BRATS dataset? | PSNR |
What metrics were used to measure the SRNet model in the Editing Text in the Wild paper on the KITTI Object Tracking Evaluation 2012 dataset? | Average PSNR |
What metrics were used to measure the GNR model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the cat2dog dataset? | DFID, FID, Kernel Inception Distance |
What metrics were used to measure the StarGANv2 model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the cat2dog dataset? | DFID, FID, Kernel Inception Distance |
What metrics were used to measure the DRIT++ model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the cat2dog dataset? | DFID, FID, Kernel Inception Distance |
What metrics were used to measure the CouncilGAN model in the GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) paper on the cat2dog dataset? | DFID, FID, Kernel Inception Distance |
What metrics were used to measure the U-GAT-IT model in the U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation paper on the cat2dog dataset? | DFID, FID, Kernel Inception Distance |
What metrics were used to measure the DP-GAN model in the Dual Pyramid Generative Adversarial Networks for Semantic Image Synthesis paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the OASIS model in the You Only Need Adversarial Supervision for Semantic Image Synthesis paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the SPADE model in the Semantic Image Synthesis with Spatially-Adaptive Normalization paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the pix2pixHD model in the High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the CRN model in the Photographic Image Synthesis with Cascaded Refinement Networks paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the SIMS model in the Semi-parametric Image Synthesis paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the CoCosNet model in the Cross-domain Correspondence Learning for Exemplar-based Image Translation paper on the ADE20K-Outdoor Labels-to-Photos dataset? | mIoU, Accuracy, FID |
What metrics were used to measure the InstaGAN model in the InstaGAN: Instance-aware Image-to-Image Translation paper on the Object Transfiguration (sheep-to-giraffe) dataset? | classification score |
What metrics were used to measure the CycleGAN model in the InstaGAN: Instance-aware Image-to-Image Translation paper on the Object Transfiguration (sheep-to-giraffe) dataset? | classification score |
What metrics were used to measure the pix2pix model in the Image-to-Image Translation with Conditional Adversarial Networks paper on the Cityscapes Photo-to-Labels dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the CycleGAN model in the Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks paper on the Cityscapes Photo-to-Labels dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the CoGAN model in the Coupled Generative Adversarial Networks paper on the Cityscapes Photo-to-Labels dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the BiGAN model in the Adversarially Learned Inference paper on the Cityscapes Photo-to-Labels dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the SimGAN model in the Learning from Simulated and Unsupervised Images through Adversarial Training paper on the Cityscapes Photo-to-Labels dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the HRDA + PiPa model in the PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the MIC model in the MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the HRDA model in the HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the SePiCo model in the SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the CAMix (w DAFormer) model in the Context-Aware Mixup for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the DAFormer + ProCST model in the ProCST: Boosting Semantic Segmentation Using Progressive Cyclic Style-Transfer paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the DAFormer model in the DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the TransDA-B model in the Smoothing Matters: Momentum Transformer for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the CPSL model in the Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the ProDA+CRA model in the Cross-Region Domain Adaptation for Class-level Alignment paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the ProDA model in the Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the CAMix (w Deeplabv2 ResNet 101) model in the Context-Aware Mixup for Domain Adaptive Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the IAST(ResNet-101) model in the Instance Adaptive Self-Training for Unsupervised Domain Adaptation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the PyCDA (ResNet-101) model in the Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the FADA (ResNet-101) model in the Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the Bidirectional Learning (ResNet-101) model in the Bidirectional Learning for Domain Adaptation of Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the DADA (ResNet-101) model in the DADA: Depth-aware Domain Adaptation in Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the LRENT (DeepLabv2) model in the Confidence Regularized Self-Training paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the SWD model in the Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the ADVENT model in the ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the Multi-level Adaptation model in the Learning to Adapt Structured Output Space for Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the Discriminative Patch (ResNet-101) model in the Domain Adaptation for Structured Output via Discriminative Patch Representations paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the Single-level Adaptation model in the Learning to Adapt Structured Output Space for Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the CAG-UDA model in the Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the Domain Invariant Structure Extraction model in the All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the superpixel + color constancy model in the A Curriculum Domain Adaptation Approach to the Semantic Segmentation of Urban Scenes paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the CDA model in the Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the FCNs in the wild model in the FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation paper on the SYNTHIA-to-Cityscapes dataset? | mIoU (13 classes) |
What metrics were used to measure the hi model in the (0,4) brane box models paper on the 2017_test set dataset? | 10 way 1~2 shot |
What metrics were used to measure the FQ-GAN model in the Feature Quantization Improves GAN Training paper on the anime-to-selfie dataset? | Kernel Inception Distance |
What metrics were used to measure the U-GAT-IT model in the U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation paper on the anime-to-selfie dataset? | Kernel Inception Distance |
What metrics were used to measure the Shared discriminator GAN model in the Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator paper on the Zebra and Horses dataset? | Kernel Inception Distance |
What metrics were used to measure the cGAN model in the Image-to-Image Translation with Conditional Adversarial Networks paper on the Aerial-to-Map dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the DualGAN model in the DualGAN: Unsupervised Dual Learning for Image-to-Image Translation paper on the Aerial-to-Map dataset? | Class IOU, Per-class Accuracy, Per-pixel Accuracy |
What metrics were used to measure the pyramidpix2pix model in the BCI: Breast Cancer Immunohistochemical Image Generation through Pyramid Pix2pix paper on the LLVIP dataset? | PSNR, SSIM |
What metrics were used to measure the cycleGAN model in the BCI: Breast Cancer Immunohistochemical Image Generation through Pyramid Pix2pix paper on the LLVIP dataset? | PSNR, SSIM |
What metrics were used to measure the pix2pixHD model in the BCI: Breast Cancer Immunohistochemical Image Generation through Pyramid Pix2pix paper on the LLVIP dataset? | PSNR, SSIM |
What metrics were used to measure the pix2pix model in the LLVIP: A Visible-infrared Paired Dataset for Low-light Vision paper on the LLVIP dataset? | PSNR, SSIM |
What metrics were used to measure the INADE model in the Diverse Semantic Image Synthesis via Probability Distribution Modeling paper on the Deep-Fashion dataset? | FID |
What metrics were used to measure the CoCosNet model in the Cross-domain Correspondence Learning for Exemplar-based Image Translation paper on the Deep-Fashion dataset? | FID |
What metrics were used to measure the StarGAN v2 model in the StarGAN v2: Diverse Image Synthesis for Multiple Domains paper on the AFHQ dataset? | FID, LPIPS |
What metrics were used to measure the U-GAT-IT model in the U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation paper on the dog2cat dataset? | Kernel Inception Distance |
What metrics were used to measure the U-GAT-IT model in the U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation paper on the vangogh2photo dataset? | Kernel Inception Distance, Frechet Inception Distance, Number of Params |
What metrics were used to measure the PoL (CycleGAN) model in the Powers of layers for image-to-image translation paper on the vangogh2photo dataset? | Kernel Inception Distance, Frechet Inception Distance, Number of Params |
What metrics were used to measure the CycleGAN model in the Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks paper on the vangogh2photo dataset? | Kernel Inception Distance, Frechet Inception Distance, Number of Params |
What metrics were used to measure the MIC model in the MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the HRDA + PiPa model in the PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the HRDA model in the HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DAFormer + PiPa model in the PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the SePiCo model in the SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the CAMix (w DAFormer) model in the Context-Aware Mixup for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DAFormer + ProCST model in the ProCST: Boosting Semantic Segmentation Using Progressive Cyclic Style-Transfer paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DAFormer model in the DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the TransDA-B model in the Smoothing Matters: Momentum Transformer for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DDB model in the Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the EHTDI* model in the Exploring High-quality Target Domain Information for Unsupervised Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the CPSL model in the Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |