Simple Diffusion XS-2b
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "AiArtLab/sdxs-2b", torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")
prompt = "sdxs-2b"
image = pipe(prompt).images[0]
XS Size, Excess Quality
Training status: 4×RTX 5090 / we need more gold / please support us!
At AiArtLab, we strive to create a free, compact and fast model that can be trained on consumer graphics cards.
#!pip install -U torch torchvision
#!pip install -U diffusers accelerate transformers
import torch
from diffusers import DiffusionPipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
pipe_id = "AiArtLab/sdxs-2b"
pipe = DiffusionPipeline.from_pretrained(
    pipe_id,
    torch_dtype=dtype,
    trust_remote_code=True,
).to(device)
refined = "A blonde-haired Red Eyes girl with a hair ribbon, half-updo, and tsurime stands solo in a flower field holding a bouquet with a serene smile, wearing green overalls, a white shirt, rolled-up sleeves, and a straw hat with a flower while looking at the viewer under volumetric and natural lighting with a Dutch angle."
negative_prompt = "worst quality, low quality, loli, low details, blurry, jpeg artifacts, unfinished, sketch, sepia, missing limb, text, bad anatomy, bad proportions, bad hands, missing fingers"
output = pipe(
    prompt=refined,
    negative_prompt=negative_prompt,
)
image = output.images[0]
image.show()
# the custom pipeline code ships a prompt refiner (requires trust_remote_code=True)
refined = pipe.refine_prompts("1girl")
print(refined[0])
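For reproducible generations, a seeded `torch.Generator` can be passed to the pipeline call; the `generator` argument is standard across diffusers pipelines, though its use with sdxs-2b's custom pipeline is an assumption. A minimal sketch, assuming `pipe` was loaded as above:

```python
import torch

# A fixed seed makes the pipeline's initial noise, and hence the output, repeatable
generator = torch.Generator(device="cpu").manual_seed(42)

# Assuming `pipe`, `refined`, and `negative_prompt` from the example above:
# image = pipe(prompt=refined, negative_prompt=negative_prompt,
#              generator=generator).images[0]

# Demonstration that the same seed reproduces the same noise tensor
a = torch.randn(4, generator=torch.Generator("cpu").manual_seed(42))
b = torch.randn(4, generator=torch.Generator("cpu").manual_seed(42))
print(torch.equal(a, b))
```

Re-running with the same seed and prompt should yield an identical image, which is useful when comparing prompt or parameter tweaks.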
Donated: $0
Thanks for your support!
Please contact us if you can provide GPUs or funding for training.
@misc{sdxs,
title={Simple Diffusion XS-2b},
author={recoilme and AiArtLab Team},
url={https://huggingface.co/AiArtLab/sdxs-2b},
year={2026}
}