Use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate

# Gated model: log in with a HF token that has gated-access permission
hf auth login
```
```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "thehive/wide2D", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("output.png")
```
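The comment in the snippet suggests switching to "mps" on Apple devices. A minimal sketch for picking the device at runtime, using only plain PyTorch availability checks (no assumptions beyond `torch` itself):

```python
import torch

# Pick the best available backend: CUDA GPU, Apple "mps", or CPU fallback.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed to `device_map=` (or to `pipe.to(device)` on older Diffusers versions).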


wide2D

A state-of-the-art Stable Diffusion v1.5 fine-tune trained on 1,000+ landscape anime/2D images.
The current model was fine-tuned with a 2.0e-6 learning rate for 128,000 training steps.
The dataset was preprocessed with an Aspect Ratio Bucketing Tool.
BLIP-2 was used to caption the training images, since natural-language prompts tend to be more effective.
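The card does not specify which bucketing tool was used; the core idea is to assign each training image to the bucket resolution whose aspect ratio is closest, so batches can be formed from same-resolution images without heavy cropping. A minimal sketch, with a hypothetical bucket list (not taken from the model card):

```python
# Aspect-ratio-bucketing sketch. The bucket resolutions below are
# hypothetical examples, not the ones used to train this model.
BUCKETS = [(512, 512), (576, 448), (448, 576), (640, 384), (384, 640)]

def nearest_bucket(width, height):
    # Choose the bucket whose width/height ratio is closest to the image's.
    ratio = width / height
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ratio))

def bucketize(sizes):
    # Group image sizes by their nearest bucket so each group can form
    # a batch at a single training resolution.
    groups = {}
    for w, h in sizes:
        groups.setdefault(nearest_bucket(w, h), []).append((w, h))
    return groups

groups = bucketize([(1920, 1080), (1000, 1000), (800, 1200)])
```

In a real pipeline each image would then be resized (and lightly cropped) to its bucket resolution before training.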

Model Details

Sample Images

(Six sample images are shown on the model page.)
