Instructions for using Nesslovver/Wan_v-e with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use Nesslovver/Wan_v-e with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("fill-in-base-model", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("Nesslovver/Wan_v-e")

prompt = "-"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Wan_v-e

Model description
Character LoRA for text-to-video (t2v), low-noise model, epoch 7.
Trigger words
You should use `Vane$$@` in your prompt to trigger the image generation.
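As an illustration, the trigger word can simply be prepended to each prompt before it is passed to the pipeline. The helper below is hypothetical (not part of this repo); only the trigger word itself comes from the card:

```python
# The card's trigger word; `build_prompt` is a hypothetical convenience helper.
TRIGGER_WORD = "Vane$$@"

def build_prompt(description: str) -> str:
    """Return a prompt that starts with the trigger word."""
    return f"{TRIGGER_WORD}, {description}"

print(build_prompt("walking through a neon-lit street"))
```

The resulting string (e.g. `Vane$$@, walking through a neon-lit street`) would then be used as the `prompt` argument in the Diffusers snippet above.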
Download model
Download them in the Files & versions tab.
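Alternatively, the repository files can be fetched from the command line with the Hugging Face CLI. This is a sketch, not card-endorsed usage: it assumes `huggingface_hub` is installed and, for gated repos, that you are logged in (see the login command below):

```shell
# Download all files of the LoRA repo to the local Hub cache
hf download Nesslovver/Wan_v-e
```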
```shell
# Gated model: log in with an HF token that has gated-access permission
hf auth login
```