Instructions to use aharshit123456/learn_ddpm with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use aharshit123456/learn_ddpm with Diffusers (see also the unconditional sampling sketch after the Local Apps list):
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("aharshit123456/learn_ddpm", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
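Note that the snippet above is the hub's generic text-to-image boilerplate. Since this repository appears to hold a DDPM-style (unconditional) model, sampling without a text prompt via DDPMPipeline may be a better fit. The following is a minimal sketch under the assumption that the checkpoint is saved in DDPMPipeline format; adjust if the repo stores only a UNet or uses a different pipeline class.

import torch
from diffusers import DDPMPipeline

# assumption: the checkpoint loads as an unconditional DDPM pipeline
pipe = DDPMPipeline.from_pretrained("aharshit123456/learn_ddpm")
pipe.to("cuda")  # or "mps" on Apple devices, or "cpu"

# unconditional sampling: a plain DDPM takes no text prompt
image = pipe(num_inference_steps=1000).images[0]
image.save("sample.png")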
Query #1
by SampadKar
Could you tell me how long it took to train the current model, how many epochs were used, and which GPU was used for training?
Why didn’t you use a VAE for downsampling the images first? As far as I know, training diffusion models at full resolution is extremely computationally expensive.
I have a few more questions to ask; it would be very helpful if you could share your LinkedIn ID.
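For reference on the VAE point above: latent-diffusion setups encode images into a smaller latent space with a pretrained VAE before training the diffusion model, which shrinks the spatial resolution (typically 8x per side) and greatly reduces compute. A minimal sketch of that encoding step, assuming a generic pretrained VAE (stabilityai/sd-vae-ft-mse, which is not part of this repo) and dummy input images:

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

# dummy batch of 256x256 RGB images scaled to [-1, 1]
x = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    # encode to the latent distribution and sample from it
    latents = vae.encode(x).latent_dist.sample()
    # scale latents before training the diffusion model on them
    latents = latents * vae.config.scaling_factor

print(latents.shape)  # torch.Size([1, 4, 32, 32]) -- 8x smaller per side than the input

The diffusion model is then trained on these 32x32x4 latents instead of the full 256x256x3 pixels, which is why latent diffusion is so much cheaper than pixel-space training.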