How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Bugjuhjugjyy/tails-diffusion", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

tails diffusion: a Stable Diffusion model fine-tuned with DreamBooth, trained using TheLastBen's fast-DreamBooth.ipynb notebook

model by Bugjuhjugjyy

This is the Stable Diffusion model fine-tuned on the tails diffusion concept, taught to Stable Diffusion with DreamBooth. It can be used by modifying the instance_prompt(s): images
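As a sketch of how the instance prompt is used: prompts for this concept should include the instance token (here, "images") alongside the scene description. The helper below is hypothetical, not part of the model card; only the token itself comes from the card above.

```python
def build_prompt(instance_token: str, description: str) -> str:
    """Combine a DreamBooth instance token with a scene description.

    The "a photo of ..." template is a common DreamBooth convention,
    not something specified by this model card.
    """
    return f"a photo of {instance_token}, {description}"


# The instance token for this concept is "images" (per the card above)
prompt = build_prompt("images", "standing in a forest, detailed, 8k")
print(prompt)
```

The resulting string can then be passed directly to the pipeline, e.g. `pipe(prompt).images[0]`.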

You can also train your own concepts and upload them to the library by using TheLastBen's fast-DreamBooth.ipynb. You can then run your new concept via diffusers: Colab Notebook for Inference, Spaces with the Public Concepts loaded

Here are the images used for training this concept:

(training images 0–7 are shown on the model page)
