radames committed · Commit 6db85f5 (verified) · 1 Parent(s): 5bccdd9

Add files using upload-large-folder tool
README.md CHANGED
@@ -1,7 +1,9 @@
 ---
-library_name: diffusers
+pipeline_tag: text-to-image
 tags:
 - pruna-ai
+- safetensors
+inference: false
 ---
 
 # Model Card for radames/smashed-stabilityai-sd-turbo
@@ -16,19 +18,20 @@ First things first, you need to install the pruna library:
 pip install pruna
 ```
 
-You can [use the diffusers library to load the model](https://huggingface.co/radames/smashed-stabilityai-sd-turbo?library=diffusers) but this might not include all optimizations by default.
+You can [use the library_name library to load the model](https://huggingface.co/radames/smashed-stabilityai-sd-turbo?library=library_name) but this might not include all optimizations by default.
 
 To ensure that all optimizations are applied, use the pruna library to load the model using the following code:
 
 ```python
 from pruna import PrunaModel
 
-loaded_model = PrunaModel.from_hub(
+loaded_model = PrunaModel.from_pretrained(
     "radames/smashed-stabilityai-sd-turbo"
 )
-```
-
-After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.
+# we can then run inference using the methods supported by the base model
+```
+
+Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
 
 ## Smash Configuration
 
@@ -36,15 +39,35 @@ The compression configuration of the model is stored in the `smash_config.json`
 
 ```bash
 {
-  "batcher": null,
-  "cacher": "deepcache",
-  "compiler": "stable_fast",
-  "factorizer": null,
-  "pruner": null,
-  "quantizer": null,
+  "awq": false,
+  "c_generate": false,
+  "c_translate": false,
+  "c_whisper": false,
+  "deepcache": true,
+  "diffusers_int8": false,
+  "fastercache": false,
+  "flash_attn3": false,
+  "fora": false,
+  "gptq": false,
+  "half": false,
+  "hqq": false,
+  "hqq_diffusers": false,
+  "ifw": false,
+  "llm_int8": false,
+  "pab": false,
+  "qkv_diffusers": false,
+  "quanto": false,
+  "stable_fast": true,
+  "torch_compile": false,
+  "torch_dynamic": false,
+  "torch_structured": false,
+  "torch_unstructured": false,
+  "torchao": false,
+  "whisper_s2t": false,
   "deepcache_interval": 2,
   "batch_size": 1,
   "device": "cuda",
+  "device_map": null,
   "save_fns": [
     "save_before_apply"
   ],
@@ -52,12 +75,8 @@ The compression configuration of the model is stored in the `smash_config.json`
     "diffusers"
   ],
   "reapply_after_load": {
-    "factorizer": null,
-    "pruner": null,
-    "quantizer": null,
-    "cacher": "deepcache",
-    "compiler": "stable_fast",
-    "batcher": null
+    "deepcache": true,
+    "stable_fast": true
   }
 }
 ```
@@ -67,5 +86,5 @@ The compression configuration of the model is stored in the `smash_config.json`
 [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
 [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
 [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
-[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/rskEr4BZJx)
+[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/JFQmtFKCjd)
 [![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)
dtype_info.json CHANGED
@@ -1 +1 @@
-{"dtype": "float32"}
+{"dtype": "float16"}
model_index.json CHANGED
@@ -1,6 +1,6 @@
 {
-  "_class_name": "StableDiffusionImg2ImgPipeline",
-  "_diffusers_version": "0.33.1",
+  "_class_name": "StableDiffusionPipeline",
+  "_diffusers_version": "0.36.0",
   "_name_or_path": "stabilityai/sd-turbo",
   "feature_extractor": [
     null,
scheduler/scheduler_config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "EulerDiscreteScheduler",
-  "_diffusers_version": "0.33.1",
+  "_diffusers_version": "0.36.0",
   "beta_end": 0.012,
   "beta_schedule": "scaled_linear",
   "beta_start": 0.00085,
smash_config.json CHANGED
@@ -1,13 +1,33 @@
 {
-  "batcher": null,
-  "cacher": "deepcache",
-  "compiler": "stable_fast",
-  "factorizer": null,
-  "pruner": null,
-  "quantizer": null,
+  "awq": false,
+  "c_generate": false,
+  "c_translate": false,
+  "c_whisper": false,
+  "deepcache": true,
+  "diffusers_int8": false,
+  "fastercache": false,
+  "flash_attn3": false,
+  "fora": false,
+  "gptq": false,
+  "half": false,
+  "hqq": false,
+  "hqq_diffusers": false,
+  "ifw": false,
+  "llm_int8": false,
+  "pab": false,
+  "qkv_diffusers": false,
+  "quanto": false,
+  "stable_fast": true,
+  "torch_compile": false,
+  "torch_dynamic": false,
+  "torch_structured": false,
+  "torch_unstructured": false,
+  "torchao": false,
+  "whisper_s2t": false,
   "deepcache_interval": 2,
   "batch_size": 1,
   "device": "cuda",
+  "device_map": null,
   "save_fns": [
     "save_before_apply"
   ],
@@ -15,11 +35,7 @@
     "diffusers"
   ],
   "reapply_after_load": {
-    "factorizer": null,
-    "pruner": null,
-    "quantizer": null,
-    "cacher": "deepcache",
-    "compiler": "stable_fast",
-    "batcher": null
+    "deepcache": true,
+    "stable_fast": true
   }
 }
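The new `smash_config.json` flattens each algorithm into a top-level boolean flag, rather than mapping roles (`cacher`, `compiler`, …) to algorithm names. A minimal sketch (the embedded JSON is an abridged copy of the config in this commit) showing how to list which algorithms are enabled:

```python
import json

# Abridged copy of the new smash_config.json layout: each algorithm is a
# top-level boolean flag alongside scalar settings like batch_size.
smash_config = json.loads("""
{
  "awq": false,
  "deepcache": true,
  "half": false,
  "stable_fast": true,
  "torch_compile": false,
  "deepcache_interval": 2,
  "batch_size": 1,
  "device": "cuda"
}
""")

# Keep only the flags that are literally True; integer settings such as
# batch_size (1) and deepcache_interval (2) are excluded by the identity check.
enabled = sorted(k for k, v in smash_config.items() if v is True)
print(enabled)  # ['deepcache', 'stable_fast']
```

With the full config from this commit, the same filter yields `deepcache` and `stable_fast`, matching the `reapply_after_load` block.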
text_encoder/config.json CHANGED
@@ -5,6 +5,7 @@
   "attention_dropout": 0.0,
   "bos_token_id": 0,
   "dropout": 0.0,
+  "dtype": "float16",
   "eos_token_id": 2,
   "hidden_act": "gelu",
   "hidden_size": 1024,
@@ -18,7 +19,6 @@
   "num_hidden_layers": 23,
   "pad_token_id": 1,
   "projection_dim": 512,
-  "torch_dtype": "float32",
-  "transformers_version": "4.52.4",
+  "transformers_version": "4.57.3",
   "vocab_size": 49408
 }
text_encoder/model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:67e013543d4fac905c882e2993d86a2d454ee69dc9e8f37c0c23d33a48959d15
-size 1361596304
+oid sha256:bc1827c465450322616f06dea41596eac7d493f4e95904dcb51f0fc745c4e13f
+size 680820392
unet/config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "UNet2DConditionModel",
-  "_diffusers_version": "0.33.1",
+  "_diffusers_version": "0.36.0",
   "_name_or_path": "/mnt/ssd2/huggingface/hub/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/unet",
   "act_fn": "silu",
   "addition_embed_type": null,
unet/diffusion_pytorch_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:438da6db739c8651ec4152341b8133d6896db452ea27afdb6b9b0344f0c40532
-size 3463726504
+oid sha256:40ec400881e27d1376c7c95c5bd495f407b33756e80eb6365e301c33a07af6e5
+size 1731904736
vae/config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_class_name": "AutoencoderKL",
-  "_diffusers_version": "0.33.1",
+  "_diffusers_version": "0.36.0",
   "_name_or_path": "/mnt/ssd2/huggingface/hub/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/vae",
   "act_fn": "silu",
   "block_out_channels": [
vae/diffusion_pytorch_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2aa1f43011b553a4cba7f37456465cdbd48aab7b54b9348b890e8058ea7683ec
-size 334643268
+oid sha256:3e4c08995484ee61270175e9e7a072b66a6e4eeb5f0c266667fe1f45b90daf9a
+size 167335342
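Per `dtype_info.json`, this commit recasts the weights from float32 to float16, and the LFS pointer sizes bear that out: each weight file shrinks to almost exactly half. A quick check using the before/after byte counts copied from the diffs in this commit:

```python
# Before/after sizes (bytes) taken from the LFS pointers in this commit.
sizes = {
    "text_encoder/model.safetensors": (1361596304, 680820392),
    "unet/diffusion_pytorch_model.safetensors": (3463726504, 1731904736),
    "vae/diffusion_pytorch_model.safetensors": (334643268, 167335342),
}

for name, (fp32_bytes, fp16_bytes) in sizes.items():
    # float16 stores each parameter in 2 bytes instead of 4; the small
    # deviation from exactly 0.5 comes from fixed-size headers and metadata.
    print(f"{name}: {fp16_bytes / fp32_bytes:.4f}")
```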