Instructions to use wav/TemporalNet2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use wav/TemporalNet2 with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "wav/TemporalNet2", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

(The discussion below loads wav/TemporalNet2 as a ControlNet rather than as a full pipeline; see the loading sketch after the notebook links.)
- Notebooks
- Google Colab
- Kaggle
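The generic snippet above treats the repo as a full DiffusionPipeline, but the discussion below loads wav/TemporalNet2 with `ControlNetModel`. A minimal, untested sketch of that loading path, assuming a Stable Diffusion 1.5 base model (the base model is not specified on this page):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load TemporalNet2 as a ControlNet, matching the discussion below
controlnet = ControlNetModel.from_pretrained(
    "wav/TemporalNet2", torch_dtype=torch.float16
)

# Attach it to an assumed SD 1.5 base model (an assumption, not confirmed here)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```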
How to combine with HED? (#3, opened by redromnon)
I'm new to this and aware that TemporalNet works best alongside other ControlNet conditionings like HED. What tensor/image should I be using for the HED edge detection?
Here's how I intended to apply HED in the code:
1. Add multiple conditioning ControlNets in the pipe:
```python
controlnet = [
    ControlNetModel.from_pretrained("wav/TemporalNet2", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16),
]
```
2. Initialise the HED detector.
3. In `stylize_video()`, convert the `flow_img` tensor to a `PIL.Image` using `torchvision.transforms.ToPILImage`.
4. Get the HED image from the above `PIL.Image`.
5. In the pipe call, use the arguments `control_image=[control_img, hed_image]` and `controlnet_conditioning_scale=[1.0, 1.0]`.
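A minimal, untested sketch of these five steps, assuming a Stable Diffusion 1.5 base model and the `HEDdetector` from `controlnet_aux`; `stylize_video()` and `flow_img` come from the poster's own code and are stubbed here with random tensors:

```python
import torch
from controlnet_aux import HEDdetector
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from torchvision.transforms import ToPILImage

# 1. Stack both ControlNets in one pipeline (SD 1.5 base is an assumption)
controlnet = [
    ControlNetModel.from_pretrained("wav/TemporalNet2", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# 2. Initialise the HED detector
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")

# Placeholders for values produced inside the poster's stylize_video():
# flow_img is their conditioning tensor, init_frame the frame being stylized
flow_img = torch.rand(3, 512, 512)
init_frame = torch.rand(3, 512, 512)

# 3. Convert the flow_img tensor to a PIL.Image
control_img = ToPILImage()(flow_img)
init_image = ToPILImage()(init_frame)

# 4. Get the HED edge map from that PIL.Image
hed_image = hed(control_img)

# 5. Pass both conditioning images, with one scale per ControlNet
result = pipe(
    prompt="a stylized video frame",  # hypothetical prompt
    image=init_image,
    control_image=[control_img, hed_image],
    controlnet_conditioning_scale=[1.0, 1.0],
).images[0]
```

The list arguments in step 5 are the standard diffusers MultiControlNet API; whether the stock img2img pipeline accepts TemporalNet2's conditioning format as-is is not confirmed here, so check the model card for a custom inference script if this errors.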