---
library_name: diffusers
tags:
- text-to-image
license: apache-2.0
inference: false
---
# Sub-path Linear Approximation Model (SLAM): SD1.5
Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)<br>
Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)<br>

This checkpoint is distilled from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with our proposed Sub-path Linear Approximation Model (SLAM), which reduces the number of inference steps to only 2-4.
## Usage
First, install the latest version of the Diffusers library, along with `peft`, `accelerate`, and `transformers`.
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM just as you would use LCM, with `guidance_scale` always set to 1.
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# torch.float16 saves GPU memory but may slightly compromise image quality.
pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-sd1.5", torch_dtype=torch.float16)
pipe.to("cuda")
# SLAM is compatible with LCMScheduler; make sure it is the active scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "a painting of a majestic kingdom with towering castles, lush gardens, ice and snow world"
# guidance_scale stays at 1. LCMScheduler's original_inference_steps (the integrated
# replacement for the legacy lcm_origin_steps argument) already defaults to 50.
images = pipe(prompt=prompt, num_inference_steps=2, guidance_scale=1, output_type="pil").images
```
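For reproducible outputs you can additionally pass a seeded `torch.Generator` and save the resulting PIL image. A minimal sketch, continuing from the pipeline above (the seed, step count, and file name are arbitrary choices for illustration):
```python
# Fix the seed so repeated runs produce the same image.
generator = torch.Generator(device="cuda").manual_seed(42)
# 4 steps is the upper end of SLAM's recommended 2-4 step range.
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0]
image.save("slam_kingdom.png")  # hypothetical output path
```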