---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/8E - linh waits in a storm_00105_.jpeg
- text: '-'
output:
url: images/9C - Linh looks at an abandoned mineshaft _00788_.jpeg
- text: '-'
output:
url: images/17M - heartbreak _00114_.jpeg
- text: '-'
output:
url: images/17F - walking in strangers_00225_.jpeg
- text: '-'
output:
url: images/17A - Childhood enviorment _00951_.jpeg
- text: '-'
output:
url: images/16A - Jay changes_00062_.jpeg
- text: '-'
output:
url: images/16E - turning into the light _00058_.jpeg
- text: '-'
output:
url: images/9C - Linh looks at an abandoned mineshaft _00818_.jpeg
- text: '-'
output:
url: images/8C - linh sitting alone_00066_.jpeg
- text: '-'
output:
url: images/16CA - portrait cutaway_00066_.jpeg
- text: '-'
output:
url: images/15A - Linh and Jay on the train _00075_.jpeg
- text: '-'
output:
url: images/4A - glowing forms walking on the beach _00083_.jpeg
- text: '-'
output:
url: images/8E - linh waits in a storm_00086_.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: VLCM
license: apache-2.0
---

# Velcium

<Gallery />
## Model description
The VLCM (Velocium) LoRA was trained on a dataset of 60 images created for the short film. It is a distilled model: the training images were themselves generated with a separate LoRA pipeline that combined my photography LoRA, several anime LoRAs, and specific concept and character LoRAs.

Use the trigger word `VLCM` in your prompt.
## Trigger words

You should use `VLCM` to trigger the image generation.
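For example, the LoRA can be loaded on top of FLUX.1-dev with the 🧨 diffusers library. The sketch below is only illustrative: the repo id `your-username/velcium-lora` and the weight filename `velcium.safetensors` are placeholders, so substitute this repository's actual id and the filename listed under Files & versions.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model this LoRA was trained against.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder repo id / weight filename: replace with this repository's
# actual values from the Files & versions tab.
pipeline.load_lora_weights(
    "your-username/velcium-lora", weight_name="velcium.safetensors"
)

# Include the trigger word VLCM in the prompt.
image = pipeline(
    "VLCM, a lone figure waiting in a storm, cinematic lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("velcium_sample.png")
```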
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
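If you would rather fetch the Safetensors file programmatically (for example, to drop it into another UI's LoRA folder), `huggingface_hub` can download it. The repo id and filename below are placeholders for illustration:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename; use this repository's actual id and the
# .safetensors filename shown in the Files & versions tab.
local_path = hf_hub_download(
    repo_id="your-username/velcium-lora",
    filename="velcium.safetensors",
)
print(f"LoRA weights downloaded to {local_path}")
```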