Use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "amonig/new_ed_1789520182",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```
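The comment in the snippet above notes that Apple devices should use "mps". A minimal sketch of portable device selection is shown below; the helper name `pick_device` is my own and not part of diffusers:

```python
import torch

def pick_device() -> str:
    # Prefer CUDA, fall back to Apple's Metal backend, then CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

# The result can be passed as the device_map argument, e.g.:
# pipe = DiffusionPipeline.from_pretrained(..., device_map=pick_device())
```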


edge-maxxing-newdream-sdxl

This repository holds the baseline for the SDXL Nvidia GeForce RTX 4090 contest; it can be forked freely and optimized.

Some recommendations are as follows:

  • Declare dependencies in pyproject.toml, including git dependencies
  • Include compiled models directly in the repository (rather than compiling them during loading); loading time matters far more than file size
  • Avoid changing src/main.py, as it contains mostly protocol logic. Most changes should go in models and src/pipeline.py
  • Edit requirements.txt to add extra arguments to be used when installing the package
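As a rough illustration of the first recommendation, dependencies (including git dependencies) can be declared in pyproject.toml like this; the package names and URL below are placeholders, not taken from this repository:

```toml
[project]
name = "edge-maxxing-submission"  # placeholder name
version = "0.1.0"
dependencies = [
    "torch",
    "diffusers",
    # Git dependencies can be pinned directly, for example:
    # "somepkg @ git+https://github.com/example/somepkg@main",
]
```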

For testing, you need a Docker container with PyTorch and Ubuntu 22.04. You can install the listed dependencies with pip install -r requirements.txt -e ., and then run start_inference.
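The testing steps above might look like the following shell session; the Docker image tag and GPU flags are assumptions based on the text, not verified against the contest setup:

```shell
# Start an Ubuntu 22.04-based container with PyTorch preinstalled (image tag is an assumption)
docker run --gpus all -it pytorch/pytorch:latest bash

# Inside the container: install the listed dependencies plus the package itself
pip install -r requirements.txt -e .

# Then run the inference entry point named in the text
start_inference
```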
