Instructions to use Nacholmo/Counterfeit-V2.5-vae-swapped with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Nacholmo/Counterfeit-V2.5-vae-swapped with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Nacholmo/Counterfeit-V2.5-vae-swapped",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Fix deprecated float16/fp16 variant loading through new `variant` API.
Hey Nacholmo 👋,
Your model repository seems to contain an fp16 branch to load the model in float16 precision. Loading fp16 weights from a branch instead of the main branch is deprecated and will eventually be forbidden. Instead, we strongly recommend saving fp16 versions of the model as `.fp16.` variant files directly on the `main` branch, as enabled through this PR. This PR makes sure that your model repository allows users to correctly download float16 precision model weights by adding fp16 model weights in both safetensors and PyTorch bin format:
```python
pipe = DiffusionPipeline.from_pretrained(
    "Nacholmo/Counterfeit-V2.5-vae-swapped",
    torch_dtype=torch.float16,
    variant="fp16",
)
```
For more information, please have a look at: https://huggingface.co/docs/diffusers/using-diffusers/loading#checkpoint-variants.
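The `variant` argument works by looking for weight files with the variant string inserted before the file extension (for example, `diffusion_pytorch_model.fp16.safetensors` alongside `diffusion_pytorch_model.safetensors`). A minimal sketch of that naming convention, using a hypothetical `variant_filename` helper that is not part of Diffusers:

```python
def variant_filename(filename: str, variant: str) -> str:
    """Insert a variant tag (e.g. "fp16") before the file extension,
    mirroring how Diffusers names variant checkpoint files."""
    stem, ext = filename.rsplit(".", 1)
    return f"{stem}.{variant}.{ext}"

# The fp16 variant of the standard safetensors checkpoint:
print(variant_filename("diffusion_pytorch_model.safetensors", "fp16"))
# → diffusion_pytorch_model.fp16.safetensors
```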
We made sure that you can safely merge this pull request.
Best, the 🧨 Diffusers team.