Upload folder using huggingface_hub

Files changed:
- README.md +3 -109
- transformer/diffusion_pytorch_model.safetensors +2 -2

README.md
CHANGED
@@ -1,109 +1,3 @@
Removed:

---
license:
---

<div align="center">
<picture>
<img src="assets/KANDINSKY_LOGO_1_BLACK.png">
</picture>
</div>

<div align="center">
<a href="https://habr.com/ru/companies/sberbank/articles/951800/">Habr</a> | <a href="https://kandinskylab.ai/">Project Page</a> | <a href="https://arxiv.org/abs/2511.14993">Technical Report</a> | 🤗 <a href="https://huggingface.co/collections/kandinskylab/kandinsky-50-video-lite">Video Lite</a> / <a href="https://huggingface.co/collections/kandinskylab/kandinsky-50-video-pro">Video Pro</a> / <a href="https://huggingface.co/collections/kandinskylab/kandinsky-50-image-lite">Image Lite</a> | <a href="https://huggingface.co/docs/diffusers/main/en/api/pipelines/kandinsky5">🤗 Diffusers</a> | <a href="https://github.com/kandinskylab/kandinsky-5/blob/main/comfyui/README.md">ComfyUI</a>
</div>
## Kandinsky 5.0 T2V Pro Diffusers

Kandinsky 5.0 Pro is a line-up of large, high-quality video generation models (19B parameters). It offers high-quality generation in HD and supports additional generation formats such as I2V.

**⚠️ Warning!** All Pro models should be run with `pipe.enable_model_cpu_offload()` enabled.
```python
import torch
from diffusers import Kandinsky5T2VPipeline
from diffusers.utils import export_to_video

# Load the pipeline
model_id = "kandinskylab/Kandinsky-5.0-T2V-Pro-distilled-5s-Diffusers"
pipe = Kandinsky5T2VPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

pipe = pipe.to("cuda")
pipe.transformer.set_attention_backend("flex")  # <--- Set attention backend to Flex
pipe.enable_model_cpu_offload()  # <--- Enable CPU offloading for single-GPU inference
pipe.transformer.compile(mode="max-autotune-no-cudagraphs", dynamic=True)  # <--- Compile with max-autotune-no-cudagraphs

# Generate video
prompt = "A cat and a dog baking a cake together in a kitchen."
negative_prompt = "Static, 2D cartoon, cartoon, 2d animation, paintings, images, worst quality, low quality, ugly, deformed, walking backwards"

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=768,
    width=1024,
    num_frames=121,  # ~5 seconds at 24 fps
    num_inference_steps=16,
    guidance_scale=1.0,
).frames[0]

export_to_video(output, "output.mp4", fps=24, quality=9)
```
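The `num_frames=121` value above follows from the clip duration and frame rate: at 24 fps, a 5-second clip is 120 frame intervals plus the initial frame. A small illustrative helper (the function name is ours, not part of the diffusers API) makes the arithmetic explicit:

```python
def num_frames_for(seconds: float, fps: int = 24) -> int:
    """Frame count for a clip of the given duration at the given frame
    rate, counting the initial frame (duration * fps + 1)."""
    return int(round(seconds * fps)) + 1

print(num_frames_for(5))   # 121, as used in the pipeline call above
print(num_frames_for(10))  # 241
```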
# Authors

<B>Core Contributors</B>:
- <B>Video</B>: Alexey Letunovskiy, Maria Kovaleva, Lev Novitskiy, Denis Koposov, Dmitrii Mikhailov, Anastasiia Kargapoltseva, Anna Dmitrienko, Anastasia Maltseva
- <B>Image & Editing</B>: Nikolai Vaulin, Nikita Kiselev, Alexander Varlamov
- <B>Pre-training Data</B>: Ivan Kirillov, Andrey Shutkin, Nikolai Vaulin, Ilya Vasiliev
- <B>Post-training Data</B>: Julia Agafonova, Anna Averchenkova, Olga Kim
- <B>Research Consolidation & Paper</B>: Viacheslav Vasilev, Vladimir Polovnikov

<B>Contributors</B>: Yury Kolabushin, Kirill Chernyshev, Alexander Belykh, Mikhail Mamaev, Anastasia Aliaskina, Kormilitsyn Semen, Tatiana Nikulina, Olga Vdovchenko, Polina Mikhailova, Polina Gavrilova, Nikita Osterov, Bulat Akhmatov

<B>Track Leaders</B>: Vladimir Arkhipkin, Vladimir Korviakov, Nikolai Gerasimenko, Denis Parkhomenko

<B>Project Supervisor</B>: Denis Dimitrov
# Citation

```
@misc{kandinsky2025,
    author = {Alexander Belykh and Alexander Varlamov and Alexey Letunovskiy and Anastasia Aliaskina and Anastasia Maltseva and Anastasiia Kargapoltseva and Andrey Shutkin and Anna Averchenkova and Anna Dmitrienko and Bulat Akhmatov and Denis Dimitrov and Denis Koposov and Denis Parkhomenko and Dmitrii Mikhailov and Ilya Vasiliev and Ivan Kirillov and Julia Agafonova and Kirill Chernyshev and Kormilitsyn Semen and Lev Novitskiy and Maria Kovaleva and Mikhail Mamaev and Nikita Kiselev and Nikita Osterov and Nikolai Gerasimenko and Nikolai Vaulin and Olga Kim and Olga Vdovchenko and Polina Gavrilova and Polina Mikhailova and Tatiana Nikulina and Viacheslav Vasilev and Vladimir Arkhipkin and Vladimir Korviakov and Vladimir Polovnikov and Yury Kolabushin},
    title = {Kandinsky 5.0: A family of diffusion models for Video & Image generation},
    howpublished = {\url{https://github.com/kandinskylab/Kandinsky-5}},
    year = 2025
}

@misc{mikhailov2025nablanablaneighborhoodadaptiveblocklevel,
    title={$\nabla$NABLA: Neighborhood Adaptive Block-Level Attention},
    author={Dmitrii Mikhailov and Aleksey Letunovskiy and Maria Kovaleva and Vladimir Arkhipkin and Vladimir Korviakov and Vladimir Polovnikov and Viacheslav Vasilev and Evelina Sidorova and Denis Dimitrov},
    year={2025},
    eprint={2507.13546},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2507.13546},
}
```
# Acknowledgements

We gratefully acknowledge the open-source projects and research that made Kandinsky 5.0 possible:

- [PyTorch](https://pytorch.org/) — for model training and inference.
- [FlashAttention 3](https://github.com/Dao-AILab/flash-attention) — for efficient attention and faster inference.
- [Qwen2.5-VL](https://github.com/QwenLM/Qwen3-VL) — for providing high-quality text embeddings.
- [CLIP](https://github.com/openai/CLIP) — for robust text–image alignment.
- [HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo) — for video latent encoding and decoding.
- [MagCache](https://github.com/Zehong-Ma/MagCache) — for accelerated inference.
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI) — for integration into node-based workflows.

We deeply appreciate the contributions of these communities and researchers to the open-source ecosystem.
Added:

---
license: apache-2.0
---
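The `@@ -1,109 +1,3 @@` hunk header in this change records that 109 lines starting at line 1 of the old README were replaced by 3 lines starting at line 1 of the new one. A minimal sketch of decoding such a header (the helper is illustrative and skips the optional omitted-count forms unified diff allows):

```python
import re

def parse_hunk_header(header: str) -> dict:
    """Decode a unified-diff hunk header into start/count fields."""
    m = re.match(r"@@ -(\d+),(\d+) \+(\d+),(\d+) @@", header)
    if m is None:
        raise ValueError(f"not a hunk header: {header!r}")
    old_start, old_count, new_start, new_count = map(int, m.groups())
    return {"old_start": old_start, "old_count": old_count,
            "new_start": new_start, "new_count": new_count}

print(parse_hunk_header("@@ -1,109 +1,3 @@"))
# {'old_start': 1, 'old_count': 109, 'new_start': 1, 'new_count': 3}
```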
transformer/diffusion_pytorch_model.safetensors
CHANGED
@@ -1,3 +1,3 @@

Removed:

version https://git-lfs.github.com/spec/v1
oid sha256:
size

Added:

version https://git-lfs.github.com/spec/v1
oid sha256:8efcc9284fd9a552acd73da08f23db7416d0dafa96a71586a4fd9bd677329d56
size 43385935392