## How to use with the DiffusionKit library
```python
# Pipeline for Stable Diffusion 3
from diffusionkit.mlx import DiffusionPipeline

pipeline = DiffusionPipeline(
    shift=3.0,
    use_t5=False,
    model_version="argmaxinc/mlx-stable-diffusion-3-medium",
    low_memory_mode=True,
    a16=True,
    w16=True,
)

# Image generation
HEIGHT = 512
WIDTH = 512
NUM_STEPS = 50
CFG_WEIGHT = 5

image, _ = pipeline.generate_image(
    "a photo of a cat",
    cfg_weight=CFG_WEIGHT,
    num_steps=NUM_STEPS,
    latent_size=(HEIGHT // 8, WIDTH // 8),
)
```
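`latent_size` is given in latent-space units: Stable Diffusion 3's VAE downsamples by 8× in each spatial dimension, which is why the pixel resolution is divided by 8 above (and should be a multiple of 8). A small sanity-check sketch (`pixel_to_latent` is an illustrative helper, not a DiffusionKit API):

```python
def pixel_to_latent(height, width, factor=8):
    """Convert a pixel resolution to the VAE's latent grid size.

    Raises if the resolution is not a multiple of the downsampling factor.
    """
    if height % factor or width % factor:
        raise ValueError(f"height and width must be multiples of {factor}")
    return height // factor, width // factor

# 512x512 pixels correspond to a 64x64 latent grid
print(pixel_to_latent(512, 512))  # -> (64, 64)
```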

# Stable Diffusion 3 Medium on DiffusionKit MLX!

Check out the original model!

Check out the DiffusionKit GitHub repository!

*(Image: SD3 example output)*

## Usage

- Create a conda environment:

```shell
conda create -n diffusionkit python=3.11 -y
conda activate diffusionkit
pip install diffusionkit
```

- Run the CLI command:

```shell
diffusionkit-cli --prompt "detailed cinematic dof render of a \
detailed MacBook Pro on a wooden desk in a dim room with items \
around, messy dirty room. On the screen are the letters 'SD3 on \
DiffusionKit' glowing softly. High detail hard surface render" \
  --model-version argmaxinc/mlx-stable-diffusion-3-medium \
  --height 768 \
  --width 1360 \
  --seed 1001 \
  --step 50 \
  --cfg 7 \
  --output ~/Desktop/sd3_on_mac.png
```
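The `cfg_weight` parameter (and the `--cfg` flag above) sets the classifier-free guidance scale. Conceptually, at each denoising step the model's unconditional prediction is extrapolated toward the prompt-conditioned one; a minimal sketch of that blend (`apply_cfg` is an illustrative helper, not a DiffusionKit function):

```python
def apply_cfg(uncond, cond, weight):
    """Classifier-free guidance: push the unconditional prediction
    toward the text-conditioned one, scaled by the guidance weight."""
    return [u + weight * (c - u) for u, c in zip(uncond, cond)]

# weight 1 reproduces the conditional prediction; larger weights
# emphasize the prompt more strongly at the cost of diversity
print(apply_cfg([0.0, 0.0], [1.0, -1.0], 5.0))  # -> [5.0, -5.0]
```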