---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: sectorSaveBox
---

# SectorSaveBox

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers, ComfyUI, or Replicate.

It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: [ostris/flux-dev-lora-trainer](https://replicate.com/ostris/flux-dev-lora-trainer/train).

## Trigger words

You should use `sectorSaveBox` to trigger the image generation.

## Prompting Tips

You can combine the trigger word with environments, lighting styles, or action verbs. Examples:

- `sectorSaveBox, on a white wall, close-up product photo`
- `sectorSaveBox, a person holding the device indoors`
- `sectorSaveBox, studio-lit product showcase`

## Run this LoRA with an API using Replicate

The example below assumes the `replicate` Python client is installed (`pip install replicate`) and that `REPLICATE_API_TOKEN` is set in your environment.

```py
import replicate

input = {
    "prompt": "sectorSaveBox",
    "lora_weights": "https://huggingface.co/KAPPA66/sectorSaveBox/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('KAPPA66/sectorSaveBox', weight_name='lora.safetensors')
image = pipeline('sectorSaveBox').images[0]
```

## Training details

- Steps: 1000  
- Learning rate: 0.0004  
- LoRA rank: 16  
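For a sense of what rank 16 means: a rank-r LoRA adds two low-rank factors per adapted weight matrix, A (r × in) and B (out × r). A minimal sketch, using an illustrative 3072 × 3072 projection (the dimensions are an assumption, not measured from this checkpoint):

```python
def lora_param_count(out_features: int, in_features: int, rank: int) -> int:
    # A rank-r LoRA adds factors B (out x r) and A (r x in),
    # i.e. r * (out + in) trainable parameters per weight matrix.
    return rank * (in_features + out_features)

# Hypothetical 3072x3072 attention projection, rank 16:
print(lora_param_count(3072, 3072, 16))  # → 98304 parameters
```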

## Contribute your own examples

Use the [community tab](https://huggingface.co/KAPPA66/sectorSaveBox/discussions) to share images you've made with this LoRA.