Tags: Text-to-Image · Diffusers · Safetensors · English · StableDiffusionPipeline · Base Model · Photorealistic · Anime · Art · Realistic · Semi-Realistic · SG161222 · diffusionfanatic1173 · stable-diffusion · stable-diffusion-diffusers
Instructions to use Yntec/VisionVision with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Yntec/VisionVision with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Yntec/VisionVision",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Update model_index.json
model_index.json (+1 -1)

```diff
@@ -16,7 +16,7 @@
   ],
   "scheduler": [
     "diffusers",
-    "
+    "DPMSolverMultistepScheduler"
   ],
   "text_encoder": [
     "transformers",
```
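A quick way to sanity-check a downloaded copy of the model after this commit is to read the `"scheduler"` entry back out of `model_index.json`. The sketch below is a minimal, standard-library-only illustration: the expected class name `DPMSolverMultistepScheduler` comes from the diff above, while the helper name and the inline sample document are hypothetical.

```python
import json

# Expected scheduler entry after the commit: [library name, class name].
EXPECTED = ["diffusers", "DPMSolverMultistepScheduler"]

def scheduler_entry(model_index_text: str):
    """Return the "scheduler" entry from a model_index.json document."""
    return json.loads(model_index_text)["scheduler"]

# Example: a trimmed model_index.json containing only the patched entry.
sample = '{"scheduler": ["diffusers", "DPMSolverMultistepScheduler"]}'
assert scheduler_entry(sample) == EXPECTED
```

In a real checkout you would pass the contents of the repository's `model_index.json` instead of the inline sample.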