---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: sectorSaveBox
---

# SectorSaveBox

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers, ComfyUI, or Replicate.

It was trained on [Replicate](https://replicate.com/) with the AI Toolkit trainer: [ostris/flux-dev-lora-trainer](https://replicate.com/ostris/flux-dev-lora-trainer/train).

## Trigger words

Use `sectorSaveBox` in your prompt to trigger the image generation.

## Prompting Tips

Combine the trigger word with environments, lighting styles, or action verbs. Examples:

- `sectorSaveBox, on a white wall, close-up product photo`
- `sectorSaveBox, a person holding the device indoors`
- `sectorSaveBox, studio-lit product showcase`

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "sectorSaveBox",
    "lora_weights": "https://huggingface.co/KAPPA66/sectorSaveBox/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('KAPPA66/sectorSaveBox', weight_name='lora.safetensors')
image = pipeline('sectorSaveBox').images[0]
image.save("output.png")
```

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

Use the [community tab](https://huggingface.co/KAPPA66/sectorSaveBox/discussions) to share images you've made with this LoRA.
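The prompting pattern above (trigger word first, then comma-separated style fragments) can be sketched as a tiny helper. The function name and its behavior are illustrative only, not part of the model or any library:

```py
# Hypothetical helper: compose prompts from the trigger word plus
# optional comma-separated scene/style fragments.
TRIGGER = "sectorSaveBox"

def build_prompt(*fragments: str, trigger: str = TRIGGER) -> str:
    """Join the trigger word with any non-empty style fragments."""
    parts = [trigger] + [f.strip() for f in fragments if f.strip()]
    return ", ".join(parts)

print(build_prompt("on a white wall", "close-up product photo"))
# sectorSaveBox, on a white wall, close-up product photo
```

With no fragments, `build_prompt()` returns just the trigger word, which is the minimal prompt shown in the API example above.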