Instructions to use davidberenstein1957/stable-diffusion-v1-4-smashed-1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use davidberenstein1957/stable-diffusion-v1-4-smashed-1 with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "davidberenstein1957/stable-diffusion-v1-4-smashed-1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Pruna AI
How to use davidberenstein1957/stable-diffusion-v1-4-smashed-1 with Pruna AI:
pip install -U pruna diffusers transformers accelerate
from pruna import PrunaModel
import torch

# switch to "mps" for Apple devices
pipe = PrunaModel.from_pretrained(
    "davidberenstein1957/stable-diffusion-v1-4-smashed-1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Model Card for davidberenstein1957/stable-diffusion-v1-4-smashed-1
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
Usage
First things first, you need to install the pruna library:
pip install "pruna[full]"
You can then load this model using the following code:
from pruna import PrunaModel
loaded_model = PrunaModel.from_hub(
"davidberenstein1957/stable-diffusion-v1-4-smashed-1"
)
After loading the model, you can use the inference methods of the original model.
Smash Configuration
The compression configuration of the model is stored in the smash_config.json file.
{
  "batcher": null,
  "cacher": "deepcache",
  "compiler": null,
  "pruner": null,
  "quantizer": null,
  "deepcache_interval": 2,
  "max_batch_size": 1,
  "device": "cuda",
  "save_fns": [],
  "load_fns": [
    "diffusers"
  ],
  "reapply_after_load": {
    "pruner": null,
    "quantizer": null,
    "cacher": "deepcache",
    "compiler": null,
    "batcher": null
  }
}
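The only active component in this configuration is the deepcache cacher with deepcache_interval set to 2: DeepCache computes a full UNet forward pass only every second denoising step and reuses cached deep features in between. A minimal stdlib sketch of what that means for the number of full UNet passes (the 50-step count is a common Stable Diffusion v1.x default, and the halving is a rough upper bound on savings, not a measured speedup):

```python
import json
import math

# Relevant subset of smash_config.json, inlined for a self-contained example
smash_config = json.loads("""{
    "cacher": "deepcache",
    "deepcache_interval": 2,
    "device": "cuda"
}""")

interval = smash_config["deepcache_interval"]
num_inference_steps = 50  # common default for Stable Diffusion v1.x

# DeepCache runs a full UNet pass every `interval` steps; the rest reuse cache
full_unet_passes = math.ceil(num_inference_steps / interval)
print(full_unet_passes)  # 25 full passes instead of 50
```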
Model Configuration
The configuration of the model is stored in the *.json files.
{
  "model_index": {
    "_class_name": "StableDiffusionPipeline",
    "_diffusers_version": "0.33.1",
    "_name_or_path": "CompVis/stable-diffusion-v1-4",
    "feature_extractor": [
      "transformers",
      "CLIPImageProcessor"
    ],
    "image_encoder": [
      null,
      null
    ],
    "requires_safety_checker": true,
    "safety_checker": [
      "stable_diffusion",
      "StableDiffusionSafetyChecker"
    ],
    "scheduler": [
      "diffusers",
      "PNDMScheduler"
    ],
    "text_encoder": [
      "transformers",
      "CLIPTextModel"
    ],
    "tokenizer": [
      "transformers",
      "CLIPTokenizer"
    ],
    "unet": [
      "diffusers",
      "UNet2DConditionModel"
    ],
    "vae": [
      "diffusers",
      "AutoencoderKL"
    ]
  }
}
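Each entry in model_index.json maps a pipeline component to the library and class that implement it, and you can inspect this mapping with the standard library alone. A small sketch using an inlined subset of the file above:

```python
import json

# Subset of model_index.json, inlined for a self-contained example
model_index = json.loads("""{
    "_class_name": "StableDiffusionPipeline",
    "unet": ["diffusers", "UNet2DConditionModel"],
    "vae": ["diffusers", "AutoencoderKL"],
    "text_encoder": ["transformers", "CLIPTextModel"],
    "scheduler": ["diffusers", "PNDMScheduler"]
}""")

# List-valued entries are [library, class_name] pairs; strings are metadata
for name, value in model_index.items():
    if isinstance(value, list):
        library, cls = value
        print(f"{name}: {cls} (from {library})")
```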
Join the Pruna AI community!