---
library_name: diffusers
tags:
- pruna-ai
---

# Model Card for alexgenovese/FLUX.1-Kontext-dev-Smashed

This model is a compressed version of [Flux Kontext Dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev), created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First things first, you need to install the pruna library:

```bash
pip install pruna
```

You can [load the model with the diffusers library](https://huggingface.co/alexgenovese/FLUX.1-Kontext-smashed?library=diffusers), but this might not apply all optimizations by default.

To ensure that all optimizations are applied, load the model with the pruna library instead:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub(
    "alexgenovese/FLUX.1-Kontext-dev-smashed"
)
```

After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.

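As a minimal sketch of what that inference could look like, the helper below assumes the loaded `PrunaModel` forwards the standard FLUX.1 Kontext pipeline call (`image`, `prompt`, `guidance_scale`); `edit_image` is a hypothetical helper name, not part of the pruna API:

```python
def edit_image(image_path: str, prompt: str):
    """Sketch of image editing with the smashed model; assumes the
    PrunaModel wrapper forwards the standard diffusers Kontext call."""
    from pruna import PrunaModel
    from diffusers.utils import load_image

    model = PrunaModel.from_hub("alexgenovese/FLUX.1-Kontext-dev-smashed")
    image = load_image(image_path)
    # Standard FLUX.1 Kontext inference arguments: input image + edit prompt.
    result = model(image=image, prompt=prompt, guidance_scale=2.5)
    return result.images[0]

# Example call (requires a GPU and a model download):
# edited = edit_image("cat.png", "Make the cat wear a tiny wizard hat")
# edited.save("cat_wizard.png")
```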
## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.

```bash

```

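Since `smash_config.json` is plain JSON, it can be inspected programmatically. The sketch below uses a hypothetical configuration payload; the real file's keys and values depend on which optimizations were actually applied to this model:

```python
import json

# Hypothetical example of a smash_config.json payload; the actual keys
# and values for this model may differ.
example_config = '{"compiler": "torch_compile", "cacher": "none"}'

config = json.loads(example_config)
for method, value in config.items():
    print(f"{method}: {value}")
```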
## 🌍 Follow Me

[Twitter](https://twitter.com/alexgenovese)
[GitHub](https://github.com/alexgenovese)