---
language:
- en
license: other
license_name: stabilityai-community
license_link: https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE
library_name: diffusers
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-3.5
- stable-diffusion-3.5-large
- pruning
- structural-pruning
- obs-diff
- pytorch
base_model: stabilityai/stable-diffusion-3.5-large
pipeline_tag: text-to-image
---




# OBS-Diff Structured Pruning for Stable Diffusion 3.5-Large

<div style="
    display: flex; 
    flex-wrap: wrap; 
    align-items: flex-start; 
    gap: 20px; 
    border: 1px solid #e0e0e0; 
    padding: 20px; 
    border-radius: 10px; 
    margin-bottom: 20px; 
    background-color: #fff;
">
  
  <div style="flex: 1; min-width: 280px; max-width: 100%;">
    <img src="teaser.jpg" alt="OBS-Diff" style="width: 100%; height: auto; border-radius: 5px;" />
  </div>

  <div style="flex: 2; min-width: 300px;">
    <h4 style="margin-top: 0;">✂️ <a href="https://alrightlone.github.io/OBS-Diff-Webpage/">OBS-Diff: Accurate Pruning for Diffusion Models in One-Shot</a></h4>
    <p>
      <em><b>Junhan Zhu</b>, Hesong Wang, Mingluo Su, Zefang Wang, Huan Wang*</em>
      <br>
      <a href="https://arxiv.org/abs/2510.06751"><img src="https://img.shields.io/badge/Preprint-arXiv-b31b1b.svg?style=flat-square"></a>
      <a href="https://github.com/Alrightlone/OBS-Diff"><img src="https://img.shields.io/github/stars/Alrightlone/OBS-Diff?style=flat-square&logo=github"></a>
    </p>
    <p>
      The <b>first training-free, one-shot pruning framework</b> for Diffusion Models, supporting diverse architectures and pruning granularities. Uses Optimal Brain Surgeon (OBS) to achieve <b>SOTA</b> compression with high generative quality.
    </p>
  </div>

</div>

This repository contains the structurally pruned checkpoints for Stable Diffusion 3.5 Large. These models were compressed with OBS-Diff, an accurate one-shot pruning method designed to reduce model size and accelerate inference while preserving high-quality image generation.

By removing redundant parameters from the Transformer backbone, we offer variants at several sparsity levels (15% to 30%), allowing a flexible trade-off between efficiency and generation quality.
![](sd3-5-2.png)
![](sd3-5-1.png)
![](sd3-5-3.png)

### Pruned Transformer Variants
| Sparsity (%) | 0 (Dense) |  15 | 20 | 25 | 30 |
| :--- | :---: | :---: |  :---: | :---: | :---: |
| **Params (B)** | 8.06 | 7.28 | 7.02 | 6.76 | 6.54 |
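Note that the overall parameter reduction is smaller than the nominal sparsity, presumably because pruning targets only the prunable substructures of the Transformer rather than every parameter. A quick sketch of the reductions implied by the table values:

```python
# Transformer parameter counts (in billions) from the table above.
dense = 8.06
pruned = {15: 7.28, 20: 7.02, 25: 6.76, 30: 6.54}

# Overall reduction relative to the dense model, per sparsity level.
reductions = {s: (dense - p) / dense * 100 for s, p in pruned.items()}
for s, r in reductions.items():
    print(f"{s}% sparsity -> {pruned[s]:.2f} B params ({r:.1f}% fewer than dense)")
```

For example, the 30% sparsity variant removes about 19% of the Transformer's total parameters.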

### How to use the pruned model

1. Download the base model (SD3.5-Large) from [Hugging Face](https://huggingface.co/stabilityai/stable-diffusion-3.5-large) or ModelScope.

2. Download the pruned weights (.pth files) and use `torch.load` to replace the original Transformer in the pipeline.

3. Run inference using the code below.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# 1. Load the base SD3.5-Large model
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.float16)

# 2. Swap the original Transformer with the pruned Transformer checkpoint
# Note: Ensure the path points to your downloaded .pth file
pruned_transformer_path = "/path/to/sparsity_30/pruned_model.pth"
pipe.transformer = torch.load(pruned_transformer_path, weights_only=False)
pipe = pipe.to("cuda")

# Verify the parameter count against the table above
total_params = sum(p.numel() for p in pipe.transformer.parameters())
print(f"Total Transformer parameters: {total_params / 1e9:.2f} B")

image = pipe(
    prompt="photo of a delicious hamburger with fries and a coke on a wooden table, professional food photography, bokeh",
    negative_prompt=None,
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(42)
).images[0]

image.save("output_pruned.png")

```

### Citation
If you find this work useful, please consider citing:

```bibtex
@article{zhu2025obs,
  title={OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot},
  author={Zhu, Junhan and Wang, Hesong and Su, Mingluo and Wang, Zefang and Wang, Huan},
  journal={arXiv preprint arXiv:2510.06751},
  year={2025}
}
```