---
license: creativeml-openrail-m
base_model:
- stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- mnn
- cpu
- onnx
- image-generation
inference: false
---
|
|
|
|
|
# Stable Diffusion 2.1 (MNN Conversion) |
|
|
|
|
|
This repository contains a **Stable Diffusion 2.1** model converted to the [MNN](https://github.com/alibaba/MNN) format for efficient inference on CPU. |
|
|
The base model is [`stabilityai/stable-diffusion-2-1-base`](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). |
|
|
|
|
|
## Conversion Process |
|
|
|
|
|
1. Exported the original model to **ONNX** using MNN's export script:
|
|
```bash
cd mnn_path/transformers/diffusion/export
python onnx_export.py \
    --model_path stabilityai/stable-diffusion-2-1-base \
    --output_path output/path
```
|
|
|
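The export produces one ONNX file per pipeline component. A minimal sketch that sanity-checks the exported files with the `onnx` package before conversion (the file names `text_encoder.onnx`, `unet.onnx`, and `vae_decoder.onnx` are assumed from the conversion commands in step 2; adjust paths to your output directory):

```python
import os

try:
    import onnx  # optional: pip install onnx
    HAVE_ONNX = True
except ImportError:
    HAVE_ONNX = False

def check_exports(paths):
    """Map each path to 'ok', 'missing', 'invalid', or 'unchecked'."""
    results = {}
    for p in paths:
        if not os.path.exists(p):
            results[p] = "missing"
        elif not HAVE_ONNX:
            results[p] = "unchecked"
        else:
            try:
                # onnx.checker validates graph structure and operator schemas
                onnx.checker.check_model(onnx.load(p))
                results[p] = "ok"
            except Exception:
                results[p] = "invalid"
    return results

if __name__ == "__main__":
    components = ["text_encoder.onnx", "unet.onnx", "vae_decoder.onnx"]
    for path, status in check_exports(components).items():
        print(f"{path}: {status}")
```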
2. Converted each component from **ONNX** to **MNN** with `MNNConvert`:
|
|
```bash
# CLIP (text encoder)
./MNNConvert -f ONNX --modelFile ./text_encoder.onnx --MNNModel ./text_encoder.mnn --fp16

# UNet
./MNNConvert -f ONNX --modelFile ./unet.onnx --MNNModel ./unet.mnn --weightQuantBits 8 --weightQuantAsymmetric --weightQuantBlock 32

# VAE decoder
./MNNConvert -f ONNX --modelFile ./vae_decoder.onnx --MNNModel ./vae_decoder.mnn --fp16
```
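The UNet flags above request blockwise asymmetric 8-bit weight quantization: each block of 32 weights gets its own scale and zero-point, which keeps quantization error small for the large UNet while plain fp16 suffices for the smaller text encoder and VAE decoder. A NumPy sketch of the idea (illustrative only, not MNN's actual implementation):

```python
import numpy as np

def quantize_block_asymmetric(w, bits=8):
    """Asymmetric quantization of one weight block to unsigned ints."""
    qmax = (1 << bits) - 1                    # 255 for 8 bits
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = lo
    q = np.round((w - zero_point) / scale).astype(np.uint8)
    return q, scale, zero_point

def dequantize_block(q, scale, zero_point):
    return q.astype(np.float32) * scale + zero_point

rng = np.random.default_rng(0)
weights = rng.standard_normal(128).astype(np.float32)

# Quantize in blocks of 32, as --weightQuantBlock 32 would.
recon = np.concatenate([
    dequantize_block(*quantize_block_asymmetric(block))
    for block in weights.reshape(-1, 32)
])
max_err = np.abs(weights - recon).max()
print(f"max reconstruction error: {max_err:.4f}")
```

Per-block parameters matter because weight magnitudes vary widely across a UNet layer; a single scale for the whole tensor would waste most of the 8-bit range on outliers.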