---
language:
- en
base_model:
- briaai/Fibo-Edit
pipeline_tag: image-to-image
library_name: diffusers
extra_gated_description: >-
  Bria AI model weights are open source for non-commercial use only, per the
  provided [license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
extra_gated_heading: Fill in this form to get immediate access to the model for non-commercial use
extra_gated_fields:
  Name: text
  Email: text
  Company/Org name: text
  Company Website URL: text
  Discord user: text
  I agree to BRIA's Privacy policy, Terms & conditions, and acknowledge Non commercial use to be Personal use / Academy / Non profit (direct or indirect): checkbox
license: other
license_name: bria-fibo-edit
license_link: https://creativecommons.org/licenses/by-nc/4.0/deed.en
tags:
- art
- image-to-image
- image-editing
- inpainting
---
<p align="center">
  <img src="https://bria-public.s3.us-east-1.amazonaws.com/Bria-logo.svg" width="200"/>
</p>

<p align="center">
  <a href="https://github.com/Bria-AI/" target="_blank">
    <img
      alt="GitHub Repo"
      src="https://img.shields.io/badge/GitHub-Repo-181717?logo=github&logoColor=white&style=for-the-badge"
    />
  </a>
  <a href="https://huggingface.co/spaces/briaai/Fibo-Edit" target="_blank">
    <img
      alt="Hugging Face Demo"
      src="https://img.shields.io/badge/Hugging%20Face-Demo-FFD21E?logo=huggingface&logoColor=black&style=for-the-badge"
    />
  </a>
  <a href="https://platform.bria.ai" target="_blank">
    <img
      alt="Bria Platform"
      src="https://img.shields.io/badge/Bria-Platform-0EA5E9?style=for-the-badge"
    />
  </a>
  <a href="https://discord.com/invite/Nxe9YW9zHS" target="_blank">
    <img
      alt="Bria Discord"
      src="https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&logoColor=white&style=for-the-badge"
    />
  </a>
</p>

<p align="center">
  <img src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/RecolorHero.jpeg" width="1024" alt="Fibo Edit Hero Image"/>
</p>

<p align="center">
  <b>FIBO-Edit brings the power of structured prompt generation to image editing.</b><br>
  Built on FIBO's foundation of JSON-native control, FIBO-Edit delivers precise, deterministic, and fully controllable edits. No ambiguity, no surprises.
</p>

<h2>🌍 What's Fibo Edit?</h2>
<p>Most image editing models rely on loose, ambiguous text prompts. FIBO-Edit introduces a new paradigm of structured control: it operates on structured JSON inputs paired with a source image (and optionally a mask). This enables explicit, interpretable, and repeatable editing workflows optimized for professional production environments.</p>

<p>Developed by Bria AI, FIBO-Edit prioritizes transparency, legal safety, and granular control, ranking among the top models in open benchmarks for prompt adherence and quality.</p>

<p>📄 <i>Technical report coming soon.</i> For architecture details, see <a href="https://huggingface.co/briaai/FIBO">FIBO</a>.</p>

<h2>📐 The VGL Paradigm</h2>
<p>FIBO-Edit is natively built on <a href="https://docs.bria.ai/vgl">Visual GenAI Language (VGL)</a>. VGL standardizes image generation by replacing vague natural-language descriptions with explicit, human- and machine-readable JSON. By disentangling visual elements such as lighting, composition, style, and camera parameters, VGL turns editing from a probabilistic guessing game into a deterministic engineering task. FIBO-Edit reads these structured blueprints to perform precise updates without prompt drift, ensuring the output matches your exact specifications.</p>
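To make the paradigm concrete, here is a small sketch of the kind of structured blueprint this style of editing works with. The field names below (`instruction`, `target`, `lighting`, `style`) are illustrative assumptions, not the actual VGL schema; see the linked VGL documentation for the real specification.

```python
import json

# Hypothetical structured edit request. The field names are illustrative
# assumptions, not the official VGL schema.
edit_request = {
    "instruction": "relight the scene",
    "target": "background",
    "lighting": {"direction": "left", "temperature": "warm", "intensity": 0.7},
    "style": {"preserve": True},
}

# Serializing to JSON produces the explicit, machine-readable blueprint
# that structured editing consumes instead of a free-form prompt.
edit_json = json.dumps(edit_request, indent=2)
print(edit_json)
```

Because every visual attribute lives in its own key, changing one aspect of the edit (say, `lighting.intensity`) cannot accidentally alter the others, which is the core of the determinism claim above.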

<h2>📰 News</h2>
<ul>
  <li>2026-1-16: Fibo Edit released on Hugging Face 🎉</li>
  <li>2026-1-16: Integrated with the Diffusers library 🧨</li>
</ul>

<h2>🔑 Key Features</h2>
<ul>
  <li><b>Structured JSON Control</b>: Move beyond prompt drift. Define edits with explicit parameters (lighting, composition, style) in a structured JSON format for deterministic results.</li>
  <li><b>Native Masking</b>: Built-in support for mask-based editing lets you target specific regions of an image with pixel-level precision while leaving the rest untouched.</li>
  <li><b>Production-Ready Architecture</b>: At 8B parameters, the model balances high-fidelity output with the speed and efficiency required for commercial pipelines.</li>
  <li><b>Deep Customization</b>: The lightweight architecture lets researchers build specialized edit models for domain-specific tasks without compromising quality.</li>
  <li><b>Responsible &amp; Licensed</b>: Trained exclusively on fully licensed data, minimizing copyright risk for commercial users.</li>
</ul>

<h2>⚡ Quick Start</h2>

<p align="center">
  🚀 <a href="https://github.com/bria-ai/fibo-edit">Try Fibo Edit now →</a>
</p>

<p>Fibo Edit is available everywhere you build: as source code and weights, as ComfyUI nodes, or as API endpoints.</p>

<p><b>API Endpoints:</b></p>
<ul>
  <li><a href="https://docs.bria.ai/image-editing/v2-endpoints/edit-image">Bria.ai</a></li>
  <li><a href="https://fal.ai/models/bria/fibo-edit/edit">Fal.ai</a></li>
  <li><a href="https://replicate.com/bria/fibo-edit">Replicate (coming soon)</a></li>
  <li><a href="https://platform.bria.ai/labs/fibo-edit">Bria Fibo Lab</a></li>
</ul>

<p><b>Source Code &amp; Weights</b></p>
<ul>
  <li>The model is open source for non-commercial use under <a href="https://creativecommons.org/licenses/by-nc/4.0/deed.en">this license</a>.</li>
  <li>For commercial use, <a href="https://bria.ai/contact-us?hsCtaAttrib=114250296256">contact us</a>.</li>
</ul>

<h2>Quick Start Guide</h2>
<p>Clone the repository:</p>
<pre><code class="language-bash">git clone https://github.com/Bria-AI/Fibo-Edit.git
cd Fibo-Edit
</code></pre>
<p>Install the requirements and activate the environment:</p>
<pre><code class="language-bash">uv sync
source .venv/bin/activate
export PYTHONPATH=.
</code></pre>

<h3>Promptify Setup</h3>
<p>The repository supports two modes for generating structured JSON prompts:</p>

<p><b>API Mode (default):</b> Uses Gemini as the VLM. Set your API key:</p>
<pre><code class="language-bash">export GEMINI_API_KEY="your-api-key"
</code></pre>

<p><b>Local Mode:</b> Uses a local VLM (<a href="https://huggingface.co/briaai/FIBO-edit-prompt-to-JSON">briaai/FIBO-edit-prompt-to-JSON</a>) via Diffusers ModularPipelineBlocks, running on your own GPU.</p>

<p><b>Note:</b> Local VLM mode does not support mask-based editing. Use API mode for masked edits.</p>
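Since masked edits only work in API mode, a small guard in your own wrapper code can keep a mask from being silently ignored. This helper is a sketch, not part of the repository:

```python
def choose_vlm_mode(mask_image=None, prefer_local=False):
    """Pick a vlm_mode value for get_prompt().

    Local mode does not support mask-based editing, so any request
    carrying a mask is routed to API mode regardless of preference.
    """
    if mask_image is not None:
        return "api"  # masked edits require API mode
    return "local" if prefer_local else "api"
```

You would then call `get_prompt(..., vlm_mode=choose_vlm_mode(mask_image, prefer_local=True))` instead of hard-coding the mode.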

<h3>Image + Mask (API Mode)</h3>

<img src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/Masked.png" alt="Masked editing example" width="800"/>

```python
import torch
from diffusers import BriaFiboEditPipeline
from PIL import Image
from src.edit_promptify import get_prompt

# Generate a structured JSON prompt using API mode
image = Image.open("photo.jpg")
mask_image = Image.open("mask.jpg").convert("L")
edit_json = get_prompt(image=image, instruction="make it look vintage", mask_image=mask_image)

# Run Fibo Edit
pipe = BriaFiboEditPipeline.from_pretrained(
    "briaai/Fibo-Edit",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

result = pipe(
    image=image,
    mask=mask_image,  # pass the grayscale mask loaded above
    prompt=edit_json,
    num_inference_steps=50,
    guidance_scale=5,
).images[0]

result.save("edited.png")
```
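The snippet above expects a ready-made `mask.jpg`. If you need to build a mask programmatically, the common convention for inpainting-style pipelines is a grayscale image where white (255) marks the editable region; whether Fibo Edit follows exactly this convention is an assumption, so check the repository examples. Here is a stdlib-only sketch that produces raw mask bytes, convertible to a PIL image via `Image.frombytes("L", (width, height), data)`:

```python
def make_rect_mask(width, height, box):
    """Grayscale mask bytes: 255 inside `box` (editable), 0 elsewhere.

    `box` is (left, top, right, bottom) in pixels, right/bottom exclusive.
    Pixel (x, y) lives at index y * width + x, row-major like PIL's "L" mode.
    """
    left, top, right, bottom = box
    return bytes(
        255 if (left <= x < right and top <= y < bottom) else 0
        for y in range(height)
        for x in range(width)
    )

# A 64x64 mask whose central 32x32 square is editable
mask_bytes = make_rect_mask(64, 64, (16, 16, 48, 48))
```

With Pillow installed, `Image.frombytes("L", (64, 64), mask_bytes)` yields an image usable as `mask_image` in the snippet above.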

<h3>Only Image (API Mode)</h3>

<img src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/RemoveObjects.png" alt="Object removal example" width="800"/>

```python
import torch
from diffusers import BriaFiboEditPipeline
from PIL import Image
from src.edit_promptify import get_prompt

# Generate a structured JSON prompt using API mode
image = Image.open("photo.jpg")
edit_json = get_prompt(image=image, instruction="make it look vintage")

# Run Fibo Edit
pipe = BriaFiboEditPipeline.from_pretrained(
    "briaai/Fibo-Edit",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

result = pipe(
    image=image,
    prompt=edit_json,
    num_inference_steps=50,
    guidance_scale=5,
).images[0]

result.save("edited.png")
```

<h3>Only Image (Local VLM Mode)</h3>

```python
import torch
from diffusers import BriaFiboEditPipeline
from PIL import Image
from src.edit_promptify import get_prompt

# Generate a structured JSON prompt using the local VLM
image = Image.open("photo.jpg")
edit_json = get_prompt(
    image=image,
    instruction="make it look vintage",
    vlm_mode="local",
    model="briaai/FIBO-edit-prompt-to-JSON",
)

# Run Fibo Edit
pipe = BriaFiboEditPipeline.from_pretrained(
    "briaai/Fibo-Edit",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

result = pipe(
    image=image,
    prompt=edit_json,
    num_inference_steps=50,
    guidance_scale=5,
).images[0]

result.save("edited.png")
```

<h3>CLI Usage</h3>

```bash
# API mode (default)
python src/example_edit.py --images photo.jpg --instructions "change the car color to green"

# Local VLM mode
python src/example_edit.py --vlm-mode local --vlm-model briaai/FIBO-edit-prompt-to-JSON \
  --images photo.jpg --instructions "change the car color to green"
```

## More Examples
<table>
  <tr>
    <td align="center">
      <img
        src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/Relight.gif"
        width="400"
        alt="Relight"
      />
    </td>
    <td align="center">
      <img
        src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/Restyle.gif"
        width="400"
        alt="Restyle"
      />
    </td>
  </tr>
</table>
<table>
  <tr>
    <td align="center">
      <img
        src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/Retype.gif"
        width="400"
        alt="Retype"
      />
    </td>
    <td align="center">
      <img
        src="https://bria-public.s3.us-east-1.amazonaws.com/Edit+Assets/Recolor.gif"
        width="400"
        alt="Recolor"
      />
    </td>
  </tr>
</table>
<h3>Advanced Usage</h3>
<details>
<summary>VLM Options</summary>
<p>FIBO-Edit supports multiple VLM options for generating structured JSON prompts:</p>

<h4>Option 1: Gemini API (Default)</h4>
<p>To use Gemini as the VLM backbone for FIBO-Edit, follow these steps:</p>
<ol>
  <li>
    <p><b>Obtain a Gemini API Key</b><br/>
    Sign in to <a href="https://aistudio.google.com/app/apikey">Google AI Studio</a> and create an API key.</p>
  </li>
  <li>
    <p><b>Set the API Key as an Environment Variable</b><br/>
    Store your Gemini API key in the <code>GEMINI_API_KEY</code> environment variable:</p>
    <pre><code class="language-bash">export GEMINI_API_KEY=your_gemini_api_key
</code></pre>
    <p>Add the line above to your <code>.bashrc</code>, <code>.zshrc</code>, or similar shell profile for persistence.</p>
  </li>
</ol>
<pre><code class="language-python"># API mode is the default
edit_json = get_prompt(image=image, instruction="your edit instruction")

# Or explicitly specify
edit_json = get_prompt(
    image=image,
    instruction="your edit instruction",
    vlm_mode="api",
    model="gemini/gemini-2.5-flash"
)
</code></pre>
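Because a missing key only surfaces as an error at request time, it can help to fail fast at startup. A hypothetical guard, not part of the repository:

```python
import os

def require_gemini_key():
    """Raise early, with a clear message, if GEMINI_API_KEY is unset."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; export it before using API mode."
        )
    return key
```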

<h4>Option 2: Local VLM</h4>
<p>Use the <a href="https://huggingface.co/briaai/FIBO-edit-prompt-to-JSON">briaai/FIBO-edit-prompt-to-JSON</a> model locally.</p>
<pre><code class="language-python">edit_json = get_prompt(
    image=image,
    instruction="your edit instruction",
    vlm_mode="local",
    model="briaai/FIBO-edit-prompt-to-JSON"
)
</code></pre>
<p><b>Note:</b> Local mode does not support mask-based editing.</p>
</details>
<p>See the <a href="examples">examples</a> directory for more details.</p>

<p>If you have questions about this repository, feedback to share, or want to contribute directly, we welcome your issues and pull requests on GitHub. Your contributions help make FIBO better for everyone.</p>
<p>If you're passionate about fundamental research, we're hiring full-time employees and research interns. Reach out to us at hr@bria.ai.</p>

## Citation

```bibtex
@article{gutflaish2025generating,
  title={Generating an Image From 1,000 Words: Enhancing Text-to-Image With Structured Captions},
  author={Gutflaish, Eyal and Kachlon, Eliran and Zisman, Hezi and Hacham, Tal and Sarid, Nimrod and Visheratin, Alexander and Huberman, Saar and Davidi, Gal and Bukchin, Guy and Goldberg, Kfir and others},
  journal={arXiv preprint arXiv:2511.06876},
  year={2025}
}
```
<p align="center"><b>❤️ Like the FIBO model card and ⭐ star FIBO on GitHub to join the movement for responsible generative AI!</b></p>
|