Instructions to use renderartist/simplevectorhidream with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use renderartist/simplevectorhidream with Diffusers (a seeded, reproducible variant of this snippet is sketched after the local apps list below):

```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("renderartist/simplevectorhidream")

prompt = "v3ct0r style, simple flat vector art, isolated, cute playful duckling front pose holding a sign that says \"Simple Vector HiDream\" in an orange font, the background is solid dark blue background with star patterns"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
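For reproducible generations with the Diffusers snippet above, you can pass a seeded generator and explicit sampling settings. This is a minimal sketch reusing the `pipe` and `prompt` objects from that snippet; the step count and guidance scale are illustrative values, not the model's published defaults:

```python
import torch

# Reuse `pipe` and `prompt` from the snippet above.
generator = torch.Generator(device="cuda").manual_seed(42)  # fixed seed for reproducibility

image = pipe(
    prompt,
    num_inference_steps=50,  # illustrative value; tune for quality vs. speed
    guidance_scale=5.0,      # illustrative value
    generator=generator,
).images[0]

image.save("simple_vector_duckling.png")
```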
Update README.md

This model was trained to 2500 steps, 2 repeats with a learning rate of 4e-4 …
Training took around 3 hours using an RTX 4090 with 24GB VRAM; training times are on par with Flux LoRA training. Captioning was done using Joy Caption Batch with modified instructions and a token limit of 128 tokens (anything longer gets truncated during training).
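Because captions beyond 128 tokens get truncated during training, it can be worth checking caption lengths before a run. Below is a minimal sketch, assuming captions live in per-image .txt files in a hypothetical `dataset` folder and using an off-the-shelf CLIP tokenizer as a stand-in (the exact tokenizer depends on your trainer's text encoders):

```python
from pathlib import Path
from transformers import AutoTokenizer

# Stand-in tokenizer; substitute whichever tokenizer your trainer actually uses.
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-large-patch14")

TOKEN_LIMIT = 128  # captions longer than this get truncated during training

for caption_file in Path("dataset").glob("*.txt"):  # hypothetical dataset folder
    caption = caption_file.read_text().strip()
    n_tokens = len(tokenizer.encode(caption))
    if n_tokens > TOKEN_LIMIT:
        print(f"{caption_file.name}: {n_tokens} tokens (will be truncated)")
```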
I trained the model with Full and ran inference in ComfyUI using the Dev model; this is said to be the best strategy for getting high-quality outputs. The workflow is attached to the first image in the gallery; just drag and drop it into ComfyUI.
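If you want to mirror that train-on-Full, infer-on-Dev strategy in Diffusers rather than ComfyUI, a minimal sketch follows. It assumes the HiDream-ai/HiDream-I1-Dev checkpoint and that this LoRA transfers cleanly between the Full and Dev variants, as the note above suggests:

```python
import torch
from diffusers import DiffusionPipeline

# Train-on-Full, infer-on-Dev: load the Dev checkpoint for generation.
pipe = DiffusionPipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("renderartist/simplevectorhidream")

image = pipe(
    "v3ct0r style, simple flat vector art, isolated, cute playful duckling",
    num_inference_steps=28,  # illustrative; distilled Dev variants typically need fewer steps than Full
).images[0]
image.save("duckling_dev.png")
```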
Testing and training take a lot of time and personal resources. If you can afford it, please contribute to my Ko-fi (https://ko-fi.com/renderartist) – contributing will give me more flexibility to train in the cloud and continue experimenting and sharing.
renderartist.com