---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: MLLW
widget:
- text: >-
    Generate an image of a humble scene in rural Thailand. In the center of the
    image, depict a man standing in front of a small, traditional Thai house
    with a thatched roof. The house should be simple, with wooden walls and a
    corrugated metal door. The thatched roof should be worn and slightly
    disheveled, with some straw or leaves sticking out. The MLLW is thin and
    lean, with long hair, and should be dressed in worn, earth-toned clothing,
    consisting of a loose-fitting farmer's shirt with buttons undone, revealing
    a plain white undershirt. His pants should be faded and patched in places,
    with a wide belt holding them up. He should wear scuffed and dusty boots
    that look like they've seen many years of hard work. The man's facial
    expression should be kind and weary, with deep lines etched on his face from
    years of working under the sun. He should have a gentle gaze, looking
    directly at the viewer with a sense of quiet dignity.
  output:
    url: images/example_8qkqns1vb.png

---

# MLLW

<Gallery />

Trained on Replicate using:

https://replicate.com/ostris/flux-dev-lora-trainer/train


## Trigger words
Include the trigger word `MLLW` in your prompt to activate this LoRA.

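Because the adapter only activates when the trigger token appears, it can help to build prompts programmatically so the token is never forgotten. A minimal sketch (the `with_trigger` helper is hypothetical, not part of this repo):

```py
# Hypothetical helper: prepend the MLLW trigger token to any prompt
def with_trigger(prompt: str, trigger: str = "MLLW") -> str:
    """Return the prompt with the trigger token prepended (only once)."""
    if trigger in prompt:
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("a farmer standing in rural Thailand"))
# MLLW, a farmer standing in rural Thailand
```

The resulting string can be passed directly as the prompt in the diffusers example below.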

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')

# Attach this LoRA adapter
pipeline.load_lora_weights('BKKSPY/MLLW', weight_name='lora.safetensors')

# Remember to include the trigger word `MLLW` in your prompt
image = pipeline('MLLW, your prompt').images[0]
```

For more details, including weighting, merging, and fusing LoRAs, see the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).