How to use from the Diffusers library
```
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# load the pipeline; switch "cuda" to "mps" for Apple-silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "anokimchen/tiny-sd-openvino", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```

This model was converted to OpenVINO from segmind/tiny-sd using optimum-intel via the export space.
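The conversion can also be reproduced locally with the `optimum-cli` tool that ships with optimum-intel; a minimal sketch, where the output directory name is an arbitrary choice:

```
# export the PyTorch weights of segmind/tiny-sd to OpenVINO IR format
# (the output directory "tiny-sd-openvino" is just an example name)
optimum-cli export openvino --model segmind/tiny-sd tiny-sd-openvino
```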

First make sure you have optimum-intel installed:

```
pip install "optimum[openvino]"
```

To load the model and run inference, you can do as follows:

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "anokimchen/tiny-sd-openvino"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt).images[0]
```