5 Experiments

In our experimental analysis, we sought to evaluate and refine the performance of the proposed latent diffusion architecture. To this end, we employed automatic metrics, specifically FID-CLIP curves on the COCO-30K dataset, to find the optimal guidance-scale value and to compare Kandinsky with competitors (cf. Figure 4). Furthermore, we investigated various image prior setups, exploring the impact of different configurations on performance: no prior, using the text embeddings directly; a linear prior, implemented as one linear layer; a ResNet prior, consisting of 18 residual MLP blocks; and a transformer-based diffusion prior.

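The three learned prior setups differ only in the map from text embedding to image embedding. The following is a minimal structural sketch, not the actual implementation: the embedding width (768), the hidden width, and the random initialization are assumptions, and the real priors are trained modules.

```python
import numpy as np

DIM = 768  # assumed CLIP embedding width

def no_prior(text_emb):
    """'No prior' setup: the text embedding is used directly."""
    return text_emb

def make_linear_prior(rng, dim=DIM):
    """'Linear prior' setup: a single linear layer."""
    W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    b = np.zeros(dim)
    return lambda x: x @ W + b

def make_resnet_prior(rng, dim=DIM, n_blocks=18, hidden=2048):
    """'ResNet prior' setup: 18 residual MLP blocks (hidden width assumed)."""
    blocks = []
    for _ in range(n_blocks):
        W1 = rng.standard_normal((dim, hidden)) / np.sqrt(dim)
        W2 = rng.standard_normal((hidden, dim)) / np.sqrt(hidden)
        blocks.append((W1, W2))

    def forward(x):
        for W1, W2 in blocks:
            h = np.maximum(x @ W1, 0.0)  # ReLU MLP
            x = x + h @ W2               # residual connection
        return x

    return forward

rng = np.random.default_rng(0)
text_emb = rng.standard_normal(DIM)
print(no_prior(text_emb).shape)                # (768,)
print(make_linear_prior(rng)(text_emb).shape)  # (768,)
print(make_resnet_prior(rng)(text_emb).shape)  # (768,)
```

The diffusion prior is omitted here, since it is a full transformer denoiser rather than a single feed-forward map.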
An essential aspect of our experiments was the effect of latent quantization within the MoVQ autoencoder. We examined outputs with latent quantization enabled and disabled to better understand its influence on image generation quality.

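Enabling or disabling latent quantization amounts to passing the encoder output through, or around, a nearest-neighbor codebook lookup. A generic sketch of such a quantizer follows; the actual MoVQ quantizer, with its learned codebook and spatially conditioned layers, lives in the Sber-MoVQGAN repository, and the codebook size and dimensions here are arbitrary.

```python
import numpy as np

def quantize_latents(z, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance).

    z:        (n, d) continuous latents from the encoder
    codebook: (k, d) codebook vectors
    Returns the quantized latents and the chosen indices.
    """
    # squared distances between every latent and every codebook entry
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))
# latents lying close to codebook entries 3 and 7
z = codebook[[3, 7]] + 0.01 * rng.standard_normal((2, 4))
zq, idx = quantize_latents(z, codebook)
print(idx)  # [3 7]
```

With quantization disabled, the decoder simply consumes `z` instead of `zq`.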
To ensure a comprehensive evaluation, we also included an assessment of the IF model¹², which is the closest open-source competitor to our proposed model. For this purpose, we computed FID scores for the IF model¹³ (Table 1).

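The pytorch-fid tool referenced above computes FID as the Fréchet distance between Gaussian fits of Inception-v3 activations for the two image sets. That distance has a closed form; a numpy-only sketch of it (using the symmetric matrix-square-root identity for numerical stability) is:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this to the mean and covariance of Inception-v3
    features of real vs. generated images.
    """
    diff = mu1 - mu2
    # sqrtm(sigma1) via eigendecomposition (sigma1 is symmetric PSD)
    vals, vecs = np.linalg.eigh(sigma1)
    sqrt_s1 = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T
    # Tr((sigma1 @ sigma2)^{1/2}) == Tr((sqrt_s1 @ sigma2 @ sqrt_s1)^{1/2})
    m = sqrt_s1 @ sigma2 @ sqrt_s1
    mvals = np.linalg.eigvalsh(m)
    tr_covmean = np.sqrt(np.clip(mvals, 0, None)).sum()
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean

# identical statistics -> distance 0
mu, sigma = np.zeros(3), np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))  # 0.0
```

In practice the Inception statistics, not the distance itself, dominate the computation; pytorch-fid handles that part.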
However, we acknowledged the limitations of automatic metrics, which become obvious when it comes to capturing the nuances of user experience. Hence, in addition to the FID-CLIP curves, we conducted a blind human evaluation to obtain insightful feedback and to validate the quality of the generated images from the perspective of human perception, based on the DrawBench dataset (Saharia et al., 2022b).

¹¹https://github.com/ai-forever/MoVQGAN
¹²https://github.com/deep-floyd/IF
¹³https://github.com/mseitzer/pytorch-fid

Figure 4: CLIP-FID curves for different setups (no prior, linear prior, ResNet prior, diffusion prior, and diffusion prior with quantized decoding), plotting FID against CLIP similarity on COCO-30K.

Figure 5: Image generation results with the prompt "astronaut riding a horse" for the original image prior and a linear prior trained on 500 pairs of images with cats.

The combination of automatic metrics and human evaluation provides a comprehensive assessment of Kandinsky's performance, enabling us to make informed decisions about the effectiveness and usability of our proposed image prior designs.

6 Results

Our experiments and evaluations showcase the capabilities of the Kandinsky architecture in text-to-image synthesis. Kandinsky achieved an FID score of 8.03 on the COCO-30K validation set at a resolution of 256×256, which puts it in close competition with state-of-the-art models and among the top performers within open-source systems. Our methodical ablation studies further dissected the performance of different configurations: quantization of latent codes in MoVQ slightly improves
Figure 6: Human evaluation on DrawBench: Kandinsky with diffusion prior vs. competitors (IF, SD-xl, SD-2.1, and MidJourney-v5.2), with votes split by Fidelity and Alignment. The total count of votes is 5000.

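A side-by-side evaluation of this kind reduces to counting blind pairwise votes per criterion and reporting each model's share. A small sketch with made-up vote counts (not the study's data):

```python
from collections import Counter

def win_rates(votes):
    """votes: list of (criterion, winner) pairs from blind side-by-side
    comparisons. Returns each winner's share per criterion in percent."""
    totals = Counter(criterion for criterion, _ in votes)
    wins = Counter(votes)
    return {(c, w): 100.0 * n / totals[c] for (c, w), n in wins.items()}

# illustrative made-up votes, not the study's data
votes = ([("fidelity", "A")] * 3 + [("fidelity", "B")] * 1 +
         [("alignment", "A")] * 2 + [("alignment", "B")] * 2)
rates = win_rates(votes)
print(rates[("fidelity", "A")])   # 75.0
print(rates[("alignment", "B")])  # 50.0
```

Per-criterion shares sum to 100%, which matches how the bar charts in Figure 6 are read.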
Table 4: Sber-MoVQGAN comparison with competitors on the ImageNet dataset.