How to use rynmurdock/CLIP_DRaFT with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("rynmurdock/CLIP_DRaFT")

prompt = "a horse with many eyes"
image = pipe(prompt).images[0]
```
These weights come from my implementation of DRaFT (Directly Fine-Tuning Diffusion Models on Differentiable Rewards), using CLIP similarity between the generated image and the prompt as the reward.
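The core idea can be sketched with toy tensors: backpropagate a differentiable reward through the final denoising step into the trainable LoRA parameters, and ascend the reward. Everything below (the zero-initialized `lora_weight`, the stand-in `denoise_last_step`, and the cosine-similarity `reward` in place of an actual CLIP score) is an illustrative assumption, not the training code behind these weights.

```python
import torch

torch.manual_seed(0)

# stand-in for the UNet's trainable LoRA parameters (zero-initialized, as LoRA deltas are)
lora_weight = torch.zeros(4, 4, requires_grad=True)
opt = torch.optim.Adam([lora_weight], lr=1e-2)

def denoise_last_step(x_t):
    # placeholder for the final UNet denoising step with the LoRA delta applied
    return x_t + x_t @ lora_weight

def reward(x_0, target):
    # placeholder for a CLIP(image, prompt) score: cosine similarity to a fixed target
    return torch.nn.functional.cosine_similarity(
        x_0.flatten(), target.flatten(), dim=0
    )

x_t = torch.randn(8, 4)     # a fixed noise sample for this sketch
target = torch.randn(8, 4)  # stand-in for "what the reward model scores highly"

before = reward(denoise_last_step(x_t), target).item()
for _ in range(300):
    loss = -reward(denoise_last_step(x_t), target)  # maximize reward
    opt.zero_grad()
    loss.backward()  # gradient flows through the denoising step into the LoRA
    opt.step()
after = reward(denoise_last_step(x_t), target).item()
```

After a few hundred steps the reward on the fixed sample rises well above its starting value, which is the whole mechanism: in the real setting the reward gradient flows through the VAE decode and sampling step(s) into the LoRA weights instead of this toy linear map.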
Weights for this model are available in Safetensors format.
Base model: runwayml/stable-diffusion-v1-5