import torch
from diffusers import DiffusionPipeline
# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256", dtype=torch.bfloat16, device_map="cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]De-noising Diffusion Probabilistic Model trained on teticio/audio-diffusion-256 to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
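The generated images encode spectrogram magnitudes as grayscale pixels. As a rough sketch of the idea, a dB-scaled mel spectrogram can be quantized to 8-bit pixel values and approximately inverted; the exact scaling and dynamic range used in the audio-diffusion repository may differ, so treat the constants below as assumptions.

```python
# Hypothetical sketch: map dB-scaled mel values to 8-bit grayscale
# pixels and back. TOP_DB (the assumed dynamic range) and the linear
# scaling are illustrative, not necessarily what the repo uses.

TOP_DB = 80.0  # assumed dynamic range in decibels

def db_to_pixel(db: float) -> int:
    """Map a dB value in [-TOP_DB, 0] to an integer pixel in [0, 255]."""
    db = max(-TOP_DB, min(0.0, db))  # clip to the dynamic range
    return round((db + TOP_DB) / TOP_DB * 255)

def pixel_to_db(pixel: int) -> float:
    """Invert db_to_pixel, up to quantization error."""
    return pixel / 255 * TOP_DB - TOP_DB

spectrogram_db = [-80.0, -40.0, -3.0, 0.0]
pixels = [db_to_pixel(db) for db in spectrogram_db]
recovered = [pixel_to_db(p) for p in pixels]
```

The round trip loses at most half a quantization step (about 0.16 dB here), which is why a 256-level grayscale image is enough to carry a perceptually usable spectrogram.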