---
license: creativeml-openrail-m
datasets:
- sasha/prof_images_blip__SG161222-Realistic_Vision_V1.4
language:
- en
library_name: diffusers
tags:
- art
---
# realistic-diffusion-v1: A small diffusion model for realistic images

HyHorX/realistic-diffusion-v1 is a diffusers model trained to generate realistic images. It is faster than SG161222/Realistic_Vision_V6.0_B1_noVAE and runwayml/stable-diffusion-v1-5 in most tests, and sometimes faster than segmind/tiny-sd.
## Comparison

The comparisons below were run on a single T4 GPU, measuring this model against segmind/tiny-sd, runwayml/stable-diffusion-v1-5, and SG161222/Realistic_Vision_V6.0_B1_noVAE:
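The model card does not publish its benchmarking script, but a comparison like this can be reproduced with a simple wall-clock timing harness. The sketch below is an assumption-laden stand-in: the helper name `time_fn`, the warmup count, and the placeholder workload are all illustrative, not the card's actual harness. In practice `fn` would be a call like `lambda: pipe(prompt)`.

```python
import time

def time_fn(fn, *, warmup=1, runs=3):
    """Return mean wall-clock seconds per call over `runs` calls,
    after `warmup` untimed calls (to absorb one-time setup costs)."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Placeholder workload standing in for a real pipeline call
mean_s = time_fn(lambda: sum(i * i for i in range(100_000)))
print(f"{mean_s:.4f} s per call")
```

Warmup matters especially for diffusion pipelines, where the first call pays for CUDA kernel compilation and memory allocation that later calls do not.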
## Training info

- Learning rate: 5e-5 (0.00005)
- Batch size: 8
- Number of steps: 1000
- Training method: Knowledge Distillation
- Trained on: 1x T4 GPU
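Knowledge distillation here means training the small student model to reproduce the outputs of a larger teacher. The card does not publish its training loop, so the snippet below is only a conceptual sketch in plain Python: the toy callables stand in for the teacher and student U-Nets, and mean squared error between their noise predictions serves as the distillation loss (all names and numbers are illustrative assumptions).

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_step(teacher, student, noisy_latents):
    """One distillation step: compute the loss that pushes the student's
    prediction toward the frozen teacher's prediction on the same input."""
    target = teacher(noisy_latents)      # frozen teacher prediction
    prediction = student(noisy_latents)  # trainable student prediction
    return mse(prediction, target)       # minimized by the optimizer

# Toy stand-ins: the "teacher" scales inputs by 2.0,
# the "student" only approximates it, so the loss is nonzero.
teacher = lambda xs: [2.0 * x for x in xs]
student = lambda xs: [1.9 * x for x in xs]
loss = distillation_step(teacher, student, [0.1, 0.2, 0.3])
print(f"distillation loss: {loss:.6f}")
```

In a real setup the teacher's weights stay frozen, the loss is backpropagated only through the student, and the loop above runs once per batch for the 1000 steps listed.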
## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL-M License specifies:

- You can't use the model to deliberately produce or share illegal or harmful outputs or content.
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully).

Please read the full license here.
## Example code

```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "HyHorX/realistic-diffusion-v1"
prompt = "A candid portrait of an elderly person with deep wrinkles, silver hair captured in natural sunlight, wearing a highly detailed coarse wool sweater, dust motes dancing in the light, soft natural backlight, realistic shadows, authentic expression, shot on 35mm film, grainy texture, masterpiece, ultra-realistic"
neg_prompt = "makeup, young, smooth skin, doll, plastic, fake, bad proportions, blurry, high contrast, artificial lighting."

# Load the pipeline in half precision and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate the image; in a notebook the final expression displays it,
# or call image.save("output.png") to write it to disk
image = pipe(prompt, negative_prompt=neg_prompt).images[0]
image
```

