---
pipeline_tag: text-to-image
tags:
- safetensors
- pruna-ai
inference: false
---

# Model Card for radames/smashed-stabilityai-sd-turbo

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage

First, install the pruna library:

```bash
pip install pruna
```

You can [load the model with the `diffusers` library](https://huggingface.co/radames/smashed-stabilityai-sd-turbo), but this might not include all optimizations by default.

To ensure that all optimizations are applied, load the model with the pruna library:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "radames/smashed-stabilityai-sd-turbo"
)
# inference can then be run using the methods supported by the base model
```

For more information, see [the Pruna documentation](https://docs.pruna.ai/en/stable/).

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model:

```json
{
  "awq": false,
  "c_generate": false,
  "c_translate": false,
  "c_whisper": false,
  "deepcache": true,
  "diffusers_int8": false,
  "fastercache": false,
  "flash_attn3": false,
  "fora": false,
  "gptq": false,
  "half": false,
  "hqq": false,
  "hqq_diffusers": false,
  "ifw": false,
  "llm_int8": false,
  "pab": false,
  "qkv_diffusers": false,
  "quanto": false,
  "stable_fast": true,
  "torch_compile": false,
  "torch_dynamic": false,
  "torch_structured": false,
  "torch_unstructured": false,
  "torchao": false,
  "whisper_s2t": false,
  "deepcache_interval": 2,
  "batch_size": 1,
  "device": "cuda",
  "device_map": null,
  "save_fns": [
    "save_before_apply"
  ],
  "load_fns": [
    "diffusers"
  ],
  "reapply_after_load": {
    "deepcache": true,
    "stable_fast": true
  }
}
```
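As a quick sanity check, the enabled optimizations can be read directly from `smash_config.json`: the boolean flags set to `true` are the methods that were applied. A minimal sketch using only the standard library (the inline config below is a trimmed copy of the file, for illustration only; in practice, load the file shipped with the model repository):

```python
import json

# Trimmed excerpt of smash_config.json for illustration
config = json.loads("""
{
  "deepcache": true,
  "stable_fast": true,
  "torch_compile": false,
  "deepcache_interval": 2,
  "device": "cuda"
}
""")

# Boolean flags set to true are the applied compression methods.
# Note: `v is True` skips non-boolean values such as deepcache_interval.
enabled = sorted(k for k, v in config.items() if v is True)
print(enabled)  # ['deepcache', 'stable_fast']
```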

## 🌍 Join the Pruna AI community!
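For intuition on the `deepcache_interval` setting: DeepCache reuses cached high-level UNet features between denoising steps and only recomputes the full forward pass periodically. A sketch of the resulting step schedule, assuming a uniform caching strategy where a full pass runs every `interval` steps (the function name is hypothetical, not part of the pruna API):

```python
def deepcache_schedule(num_steps: int, interval: int) -> list:
    # Steps at which a full UNet forward pass runs; the steps in
    # between reuse the cached deep features from the last full pass.
    return ["full" if i % interval == 0 else "cached" for i in range(num_steps)]

# With deepcache_interval = 2, every other step reuses cached features:
print(deepcache_schedule(6, 2))  # ['full', 'cached', 'full', 'cached', 'full', 'cached']
```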
- [Twitter](https://twitter.com/PrunaAI)
- [GitHub](https://github.com/PrunaAI)
- [LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
- [Discord](https://discord.gg/JFQmtFKCjd)
- [Reddit](https://www.reddit.com/r/PrunaAI/)