Tags: Text-to-Image, Diffusers, PyTorch, StableDiffusionPipeline, stable-diffusion, diffusion-models-class, dreambooth-hackathon, food
Instructions to use Prgckwb/jiro-style-diffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Prgckwb/jiro-style-diffusion with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Prgckwb/jiro-style-diffusion", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a photo of jirostyle ramen noodles in the park"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
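The snippet above hardcodes `"cuda"` and asks you to switch to `"mps"` by hand; a small helper can pick the backend automatically. This is a sketch, not part of the model card: `pick_device` and `generate` are hypothetical names, and the dtype fallback assumes half precision is only wanted on CUDA.

```python
import torch


def pick_device() -> str:
    """Prefer CUDA, then Apple Metal (mps), then plain CPU."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"


def generate(prompt: str, model_id: str = "Prgckwb/jiro-style-diffusion"):
    """Load the pipeline on the chosen device and render one image."""
    # Imported lazily so the helper is usable even before diffusers is installed.
    from diffusers import DiffusionPipeline

    device = pick_device()
    # Half precision saves memory on CUDA; CPU (and to be safe, mps) gets float32.
    dtype = torch.float16 if device == "cuda" else torch.float32
    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)
    return pipe(prompt).images[0]
```

Usage: `generate("a photo of jirostyle ramen noodles in the park").save("jiro.png")`.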
Add `scale_factor` to vae config. (#1)
- Add `scale_factor` to vae config. (b385ab97d7cbd18a9cd42819a3372db08d11f83b)
Co-authored-by: Suraj Patil <valhalla@users.noreply.huggingface.co>
- vae/config.json +1 -0
```diff
@@ -21,6 +21,7 @@
   "norm_num_groups": 32,
   "out_channels": 3,
   "sample_size": 768,
+  "scaling_factor": 0.18215,
   "up_block_types": [
     "UpDecoderBlock2D",
     "UpDecoderBlock2D",
```
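In Stable Diffusion, the VAE's `scaling_factor` (0.18215) normalizes latents so the denoising model sees roughly unit-variance inputs: latents are multiplied by it after encoding and divided by it before decoding. A minimal sketch of that convention, using stand-in tensors instead of a real VAE (the helper names here are illustrative, not diffusers API):

```python
import torch

SCALING_FACTOR = 0.18215  # the value this commit adds to vae/config.json


def to_unet_latents(vae_sample: torch.Tensor) -> torch.Tensor:
    # In diffusers, vae.encode(image).latent_dist.sample() yields `vae_sample`;
    # scaling brings it to the range the denoiser was trained on.
    return vae_sample * SCALING_FACTOR


def to_vae_latents(latents: torch.Tensor) -> torch.Tensor:
    # Undo the scaling before calling vae.decode(latents).
    return latents / SCALING_FACTOR


# A 768px image maps to a 96x96 grid of 4-channel latents (768 / 8 = 96).
x = torch.randn(1, 4, 96, 96)
assert torch.allclose(to_vae_latents(to_unet_latents(x)), x)
```

Without the key in the config, pipelines would fall back to a default and could decode latents at the wrong scale, which is why this one-line change matters.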