Merge branch 'main' of https://huggingface.co/Vargol/PixArt-Sigma_2k_16bit

README.md

This is a copy of PixArt-alpha/PixArt-Sigma-XL-2-2K-MS
with the models loaded and saved in fp16 and bf16 formats, roughly halving their sizes.
It can be used where download bandwidth, memory or disk space are relatively low, on a T4 Colab instance for example.
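As a back-of-envelope illustration of that halving (a sketch only: the ~0.6B parameter count is taken from the published PixArt-XL-2 transformer, not measured from this repo):

```py
# Storage cost per parameter: fp32 uses 4 bytes, fp16/bf16 use 2 bytes,
# so converting the weights roughly halves the download/disk size.
params = 600_000_000  # ~0.6B parameters (assumed, PixArt-XL-2 transformer)

fp32_gib = params * 4 / 1024**3
fp16_gib = params * 2 / 1024**3
print(f"fp32: {fp32_gib:.2f} GiB, fp16/bf16: {fp16_gib:.2f} GiB")
```

The same 2x ratio applies to every float tensor that gets saved, including the text encoder and VAE.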

**NOTE: This model has been converted but not successfully tested; during memory-efficient attention it allocates a 16 GB buffer, which appears to break an MPS limitation, but it may also mean it requires more than 16 GB even with the 16-bit model.**
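The 16 GB figure in the note is consistent with simple arithmetic on the attention scores. A sketch, assuming the 2K transformer sees 128 x 128 = 16384 latent tokens and has 16 attention heads (both values are assumptions taken from the published PixArt-XL-2 configuration, not from this repo):

```py
# One full attention-score tensor is (tokens x tokens) per head.
tokens = 128 * 128  # assumed: 2048px image -> 256x256 latent -> 2x2 patches
heads = 16          # assumed from the PixArt-XL-2 config
fp32_bytes = tokens * tokens * heads * 4  # 4 bytes per fp32 score
print(fp32_bytes / 1024**3)  # 16.0 (GiB)
```

A single allocation of that size is exactly the kind of buffer that can trip an MPS limit, which is why the note hedges on whether the failure is an MPS restriction or a genuine memory requirement.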

Those with more memory, or on non-MPS GPUs, may have more luck running the diffusers script below!

A Diffusers script looks like this. **Currently (25th April 2024) you will need to install diffusers from source** (`pip install git+https://github.com/huggingface/diffusers`).

```py
import random
import sys
import torch
from diffusers import PixArtSigmaPipeline

device = 'mps'
weight_dtype = torch.bfloat16

pipe = PixArtSigmaPipeline.from_pretrained(
    "Vargol/PixArt-Sigma_2k_16bit",
    torch_dtype=weight_dtype,
    variant="fp16",
    use_safetensors=True,