How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# use .to("mps") instead for Apple devices
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-breaks-256").to("cuda")

# this model is unconditional, so the pipeline is called without a text prompt
image = pipe().images[0]  # a 256x256 mel spectrogram

Denoising Diffusion Probabilistic Model trained on the teticio/audio-diffusion-breaks-256 dataset to generate 256x256 mel spectrograms, each corresponding to 5 seconds of audio. The audio consists of 30,000 samples that have been used in music, sourced from WhoSampled and YouTube. The code to convert between audio and spectrograms, along with scripts to train and run inference, can be found at https://github.com/teticio/audio-diffusion.
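As a rough illustration of the spectrogram-to-audio direction, the sketch below maps an 8-bit grayscale spectrogram image back to log-mel values in dB. The `top_db` range and the linear pixel-to-dB mapping are assumptions based on common mel-spectrogram conventions, not the exact encoding used by the audio-diffusion repo; from the dB values, a phase-reconstruction step (e.g. Griffin-Lim) would then recover a waveform.

```python
import numpy as np

def image_to_log_mel(pixels: np.ndarray, top_db: float = 80.0) -> np.ndarray:
    """Hypothetical inverse of the image encoding: map uint8 pixels
    (0 = quietest, 255 = loudest) to log-mel values in [-top_db, 0] dB."""
    return pixels.astype(np.float32) / 255.0 * top_db - top_db

# a fully "loud" 256x256 image maps to 0 dB everywhere
spec = image_to_log_mel(np.full((256, 256), 255, dtype=np.uint8))
```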

