How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("MirageML/lowpoly-landscape", dtype=torch.bfloat16, device_map="cuda")

prompt = "a photo of lowpoly_landscape, cold color palette, muted colors, detailed, 8k"  # include the instance prompt to trigger the fine-tuned concept
image = pipe(prompt).images[0]

Low Poly Landscape on Stable Diffusion via Dreambooth

This is the Stable Diffusion model fine-tuned on the Low Poly Landscape concept, taught to Stable Diffusion with Dreambooth. It can be used by including the instance_prompt in your prompt: a photo of lowpoly_landscape
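As a minimal sketch of prompt construction: the instance token comes from the card above, while the scene description appended after it is purely illustrative.

```python
# Instance token from the model card; this triggers the fine-tuned concept
instance_prompt = "a photo of lowpoly_landscape"

# Append illustrative scene details after the token (example text, not from the card)
prompt = f"{instance_prompt}, rolling hills at sunset, muted colors, detailed"
print(prompt)
```

The resulting string can be passed directly to the pipeline shown above, e.g. `pipe(prompt).images[0]`.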

Run on Mirage

Run this model and explore text-to-3D on Mirage!

Here is a sample output for this model: [sample image]

Share your Results and Reach us on Discord!

Discord Server

Image Source
