---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: PIXELBYTE
---
# Example_1

<Gallery />
## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) with the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words

You should use `PIXELBYTE` to trigger the image generation.
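As a trivial illustration (the helper below is hypothetical and not part of this repository), you can make sure the trigger word is always present when building prompts programmatically:

```py
# Hypothetical helper: ensures the trigger word "PIXELBYTE" appears in
# any prompt sent to the pipeline. Not part of this repository.
TRIGGER = "PIXELBYTE"

def build_prompt(description: str) -> str:
    """Prepend the trigger word unless the description already contains it."""
    if TRIGGER in description:
        return description
    return f"{TRIGGER}, {description}"

print(build_prompt("a retro city skyline at dusk"))
# -> "PIXELBYTE, a retro city skyline at dusk"
```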
## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "PIXELBYTE",
    "lora_weights": "https://huggingface.co/furkansp1/example_1/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('furkansp1/example_1', weight_name='lora.safetensors')
image = pipeline('PIXELBYTE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
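For example, adapter strength can be scaled down before generation. The sketch below assumes the diffusers LoRA APIs `set_adapters` and `fuse_lora`, and requires a CUDA GPU plus access to the gated FLUX.1-dev weights; the adapter name `pixelbyte` and the 0.8 weight are illustrative choices, not values from this repo:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# Load the LoRA under a named adapter so its strength can be scaled later.
pipeline.load_lora_weights('furkansp1/example_1',
                           weight_name='lora.safetensors',
                           adapter_name='pixelbyte')

# Apply the adapter at reduced strength (0.8 instead of the default 1.0).
pipeline.set_adapters(['pixelbyte'], adapter_weights=[0.8])

# Optionally fuse the scaled weights into the base model for faster inference.
pipeline.fuse_lora()

image = pipeline('PIXELBYTE').images[0]
```

Lower adapter weights trade off the LoRA's style against the base model's priors, which can help when the trigger style overpowers the rest of the prompt.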
## Training details

- Steps: 800
- Learning rate: 0.0004
- LoRA rank: 8
## Contribute your own examples

You can use the [community tab](https://huggingface.co/furkansp1/example_1/discussions) to add images that show off what you’ve made with this LoRA.