Instructions to use muneebable/class-conditional-diffusion-cub-200 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use muneebable/class-conditional-diffusion-cub-200 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "muneebable/class-conditional-diffusion-cub-200",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
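The snippet above is the generic text-to-image call; since the tags below mark this as a class-conditional model trained on CUB-200 birds, sampling typically conditions on a class index rather than a text prompt. A toy 1-D sketch of a class-conditional DDPM sampling loop, where the step count, noise schedule, and `eps_model` stand-in are all illustrative assumptions and not this repository's actual model:

```python
import math
import random

T = 50  # number of diffusion steps (assumed; the real schedule may differ)
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars = []
prod = 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

def eps_model(x, t, class_label):
    # Dummy noise predictor: the real UNet would condition on class_label
    # (e.g. an embedding of one of the 200 bird classes).
    return 0.1 * x + 0.001 * class_label

def sample(class_label, seed=0):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, class_label)
        # Standard DDPM posterior mean for the predicted-noise parameterization
        coef = betas[t] / math.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / math.sqrt(alphas[t])
        noise = rng.gauss(0.0, 1.0) if t > 0 else 0.0
        x = mean + math.sqrt(betas[t]) * noise
    return x

x0 = sample(class_label=12)
```

The only point of the sketch is where the class label enters: it is passed to the noise predictor at every denoising step, which is what "class-conditional" means here.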
Update README.md
README.md

```diff
@@ -6,7 +6,7 @@ pipeline_tag: text-to-image
 tags:
 - pytorch
 - diffusers
--
+- conditional-image-generation
 - diffusion-models-class
 datasets:
 - dpdl-benchmark/caltech_birds2011
```