johnrachwanpruna committed
Commit b5e119e · verified · 1 Parent(s): 43c8e42

Add files using upload-large-folder tool

Files changed (2)
  1. README.md +8 -4
  2. smash_config.json +5 -1
README.md CHANGED

````diff
@@ -6,7 +6,7 @@ tags:
 - pruna_pro-ai
 ---
 
-# Model Card for PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro
+# Model Card for pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro
 
 This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
 
@@ -18,7 +18,7 @@ First things first, you need to install the pruna library:
 pip install pruna_pro
 ```
 
-You can [use the diffusers library to load the model](https://huggingface.co/PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro?library=diffusers) but this might not include all optimizations by default.
+You can [use the diffusers library to load the model](https://huggingface.co/pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro?library=diffusers) but this might not include all optimizations by default.
 
 To ensure that all optimizations are applied, use the pruna library to load the model using the following code:
 
@@ -26,7 +26,7 @@ To ensure that all optimizations are applied, use the pruna library to load the
 from pruna_pro import PrunaProModel
 
 loaded_model = PrunaProModel.from_pretrained(
-    "PrunaAI/test-save-tiny-stable-diffusion-pipe-smashed-pro"
+    "pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro"
 )
 # we can then run inference using the methods supported by the base model
 ```
@@ -44,6 +44,7 @@ The compression configuration of the model is stored in the `smash_config.json`
 "batcher": null,
 "cacher": null,
 "compiler": null,
+"decoder": null,
 "distiller": null,
 "distributer": null,
 "enhancer": null,
@@ -52,6 +53,7 @@ The compression configuration of the model is stored in the `smash_config.json`
 "pruner": null,
 "quantizer": null,
 "recoverer": null,
+"resampler": null,
 "batch_size": 1,
 "device": "cpu",
 "device_map": null,
@@ -66,11 +68,13 @@ The compression configuration of the model is stored in the `smash_config.json`
 "distiller": null,
 "kernel": null,
 "cacher": null,
+"resampler": null,
 "recoverer": null,
 "distributer": null,
 "compiler": null,
 "batcher": null,
-"enhancer": null
+"enhancer": null,
+"decoder": null
 }
 }
 ```
````
smash_config.json CHANGED

```diff
@@ -2,6 +2,7 @@
 "batcher": null,
 "cacher": null,
 "compiler": null,
+"decoder": null,
 "distiller": null,
 "distributer": null,
 "enhancer": null,
@@ -10,6 +11,7 @@
 "pruner": null,
 "quantizer": null,
 "recoverer": null,
+"resampler": null,
 "batch_size": 1,
 "device": "cpu",
 "device_map": null,
@@ -24,10 +26,12 @@
 "distiller": null,
 "kernel": null,
 "cacher": null,
+"resampler": null,
 "recoverer": null,
 "distributer": null,
 "compiler": null,
 "batcher": null,
-"enhancer": null
+"enhancer": null,
+"decoder": null
 }
 }
```
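The commit adds two new component slots, `decoder` and `resampler`, to `smash_config.json`; in this test model every slot is `null`, meaning no compression step is configured for it. A minimal sketch of inspecting such a config with the standard library: the field names are taken from the hunks above, while `smash_config_text`, `COMPONENT_SLOTS`, and `active_components` are hypothetical helpers for illustration, not part of the pruna API.

```python
import json

# Fields mirrored from the smash_config.json hunks above; every
# component slot in this test model is null (no compression applied).
smash_config_text = """
{
  "batcher": null,
  "cacher": null,
  "compiler": null,
  "decoder": null,
  "distiller": null,
  "distributer": null,
  "enhancer": null,
  "pruner": null,
  "quantizer": null,
  "recoverer": null,
  "resampler": null,
  "batch_size": 1,
  "device": "cpu",
  "device_map": null
}
"""

# The component slots listed in the diff (illustrative name).
COMPONENT_SLOTS = (
    "batcher", "cacher", "compiler", "decoder", "distiller",
    "distributer", "enhancer", "pruner", "quantizer",
    "recoverer", "resampler",
)


def active_components(config: dict) -> dict:
    """Return only the component slots that are actually configured."""
    return {k: config[k] for k in COMPONENT_SLOTS if config.get(k) is not None}


config = json.loads(smash_config_text)
print(active_components(config))  # prints {}
```

A model smashed with, say, a quantizer would instead show that slot as a non-null value, and `active_components` would surface it.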