---
license: apache-2.0
tags:
- Kandinsky
- text-image
- text2image
- diffusion
- latent diffusion
- mCLIP-XLMR
- mT5
---

# Kandinsky 2.0

Kandinsky 2.0 — the first multilingual text2image model.

[Open In Colab](https://colab.research.google.com/drive/1uPg9KwGZ2hJBl9taGA_3kyKGw12Rh3ij?usp=sharing)

[GitHub repository](https://github.com/ai-forever/Kandinsky-2.0)

[Habr post](https://habr.com/ru/company/sberbank/blog/701162/)

[Demo](https://rudalle.ru/)

**UNet size: 1.2B parameters**
It is a latent diffusion model with two multilingual text encoders:
* mCLIP-XLMR (560M parameters)
* mT5-encoder-small (146M parameters)

These encoders, together with multilingual training datasets, enable genuinely multilingual text2image generation.
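
The model card does not spell out how the two encoders jointly condition the UNet. One common scheme is to project each encoder's token embeddings to a shared width and concatenate them along the sequence axis, so the UNet's cross-attention sees both streams. A toy numpy sketch under that assumption (all dimensions below are illustrative, not the model's real sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len = 77

# Hypothetical per-token embeddings from the two text encoders.
clip_emb = rng.standard_normal((seq_len, 1024))  # mCLIP-XLMR tokens
mt5_emb = rng.standard_normal((seq_len, 512))    # mT5-small tokens

def project(x, out_dim, rng):
    # Linear projection to a shared width (scaled random weights here,
    # learned weights in a real model).
    w = rng.standard_normal((x.shape[-1], out_dim)) / np.sqrt(x.shape[-1])
    return x @ w

# Concatenate along the sequence axis: cross-attention can now attend
# to tokens from either encoder.
context = np.concatenate([project(clip_emb, 768, rng),
                          project(mt5_emb, 768, rng)], axis=0)
print(context.shape)  # (154, 768)
```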
# How to use

Install the package, then generate images:

```bash
pip install "git+https://github.com/ai-forever/Kandinsky-2.0.git"
```

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img')
images = model.generate_text2img(
    'кошка в космосе',  # 'a cat in space'
    batch_size=4,
    h=512, w=512,
    num_steps=75,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.01,
    guidance_scale=10,
)
```
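
The `denoised_type='dynamic_threshold'` and `dynamic_threshold_v=99.5` arguments suggest percentile-based dynamic thresholding of the predicted denoised sample, as popularized by Imagen. A minimal numpy sketch of that idea; the function below is illustrative, not the repository's implementation:

```python
import numpy as np

def dynamic_threshold(x0_pred, percentile=99.5):
    # Pick the given percentile of |x0| as the clipping threshold s,
    # clip to [-s, s], then divide by s so the result stays in [-1, 1]
    # instead of hard-saturating at a fixed static threshold.
    s = np.percentile(np.abs(x0_pred), percentile)
    s = max(s, 1.0)  # fallback: never amplify values already in range
    return np.clip(x0_pred, -s, s) / s

x0 = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
out = dynamic_threshold(x0, percentile=99.5)
print(out)  # values rescaled into [-1, 1]
```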

# Authors

+ Arseniy Shakhmatov: [GitHub](https://github.com/cene555), [Blog](https://t.me/gradientdip)
+ Anton Razzhigaev: [GitHub](https://github.com/razzant), [Blog](https://t.me/abstractDL)
+ Aleksandr Nikolich: [GitHub](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers)
+ Vladimir Arkhipkin: [GitHub](https://github.com/oriBetelgeuse)
+ Igor Pavlov: [GitHub](https://github.com/boomb0om)
+ Andrey Kuznetsov: [GitHub](https://github.com/kuznetsoffandrey)
+ Denis Dimitrov: [GitHub](https://github.com/denndimitrov)