import torch
from diffusers import DiffusionPipeline
# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "neph1/framepack-camera-controls", torch_dtype=torch.bfloat16
).to("cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

LoRAs mentioned in this article: https://huggingface.co/blog/neph1/framepack-camera-control-loras
The trigger word for each LoRA is "cam_control". Trigger and training prompts:
If the training video bleeds through (for example, a grey cube appearing in the generation), lower the LoRA strength to somewhere between 0.5 and 1.0.
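The card recommends keeping the LoRA strength between 0.5 and 1.0 but doesn't show how to attach a LoRA and set that strength. A minimal sketch using the diffusers LoRA API, assuming the repo ships `.safetensors` weight files and that the base model loads as a `DiffusionPipeline`; the `weight_name` and adapter name below are hypothetical placeholders, not confirmed by the card:

```python
def clamp_lora_strength(strength: float, lo: float = 0.5, hi: float = 1.0) -> float:
    """Clamp the LoRA scale into the 0.5-1.0 range the card recommends."""
    return max(lo, min(hi, strength))


def load_camera_control(weight_name: str, strength: float = 0.8):
    """Load the base pipeline and attach one camera-control LoRA.

    ``weight_name`` must match a weight file in the repo; the file name
    passed by the caller is an assumption -- check the repo's file listing.
    """
    # Heavy imports kept local so the helpers above stay importable
    # without a GPU or the diffusers package installed.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16
    ).to("cuda")
    # Attach the LoRA and scale it down if bleed-through appears.
    pipe.load_lora_weights(
        "neph1/framepack-camera-controls",
        weight_name=weight_name,
        adapter_name="cam",  # hypothetical adapter name
    )
    pipe.set_adapters(["cam"], adapter_weights=[clamp_lora_strength(strength)])
    return pipe
```

Remember to include the trigger word "cam_control" in the prompt when the adapter is active.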
2025-06-05: Added two more LoRAs.
Base model: lllyasviel/FramePackI2V_HY