---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-video
license: apache-2.0
language:
- en
- zh
tags:
- Wan2.2
---

# Wan2.2-Turbo

We are excited to release the distilled version of Wan2.2, our fine-tuned variant of the text-to-video generation model, which offers the following advantages:

- **Fast**: Video generation requires only 4 steps and no CFG trick, yielding roughly a 20× speed-up.
- **High-quality**: The distilled model delivers visuals on par with the base model in most scenarios, and sometimes better.
- **Complex motion generation**: Despite the reduction to just 4 steps, the model retains excellent motion dynamics in the generated scenes.

## Usage

This model is designed to work seamlessly with [**Aquiles-Image**](https://github.com/Aquiles-ai/Aquiles-Image), which provides an OpenAI-compatible API for video generation:

```bash
pip install aquiles-image

aquiles-image serve --model "wan2.2-turbo"
```

Learn more in the [full documentation](https://aquiles-ai.github.io/aquiles-image-docs/).

## Example of a Video Generated with This Model
***Generated with prompt:** A direct continuation of the existing shot of a chameleon crawling slowly along a mossy branch. Begin with the chameleon already mid-step, camera tracking right at the same close, eye-level angle. After three seconds, its eyes swivel independently, one pausing to glance toward the lens before it resumes moving forward. Maintain the 100 mm anamorphic lens with shallow depth of field, dappled rainforest light, faint humidity haze, and subtle film grain. The moss texture and background greenery should remain consistent, with the chameleon's deliberate gait flowing naturally as if no cut occurred.*

> **Requirements**: H100 or A100-80GB GPU
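Once the server from the Usage section is running, a client can submit generation requests over its OpenAI-compatible API. The sketch below only builds a candidate request body; the base URL, endpoint path, and payload field names (`num_inference_steps`, `/v1/videos/generations`) are assumptions for illustration, not the documented Aquiles-Image schema, so check the full documentation before using them.

```python
import json

# Hypothetical client sketch for a server started with:
#   aquiles-image serve --model "wan2.2-turbo"
# The URL, route, and field names are assumptions; consult the
# Aquiles-Image docs for the exact OpenAI-compatible schema.

BASE_URL = "http://localhost:8000"  # assumed local address


def build_video_request(prompt: str, steps: int = 4) -> dict:
    """Build a generation request for the distilled model.

    The distilled model needs only 4 inference steps and no CFG,
    so `steps` defaults to 4.
    """
    return {
        "model": "wan2.2-turbo",
        "prompt": prompt,
        "num_inference_steps": steps,  # assumed parameter name
    }


payload = build_video_request(
    "A chameleon crawling slowly along a mossy branch"
)
print(json.dumps(payload, indent=2))

# To actually submit the job (requires a running server and `requests`):
# import requests
# resp = requests.post(f"{BASE_URL}/v1/videos/generations", json=payload)
```

Keeping the request-building step separate from the HTTP call makes the payload easy to inspect and test before pointing it at a live server.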