Bald Converter LoRA Weights
Bald Converter is the LoRA-based in-context bald-generation module used by HairPort: In-context 3D-Aware Hair Import and Transfer for Images, accepted to ACM SIGGRAPH 2026.
This repository contains two FLUX.1 Kontext LoRA adapters for generating realistic bald versions of portrait images while preserving identity, facial structure, clothing, background, lighting, and camera style.
What This Repository Contains
| File | Size | Variant | Purpose |
|---|---|---|---|
| `bald_konvertor_wo_seg_000003400.safetensors` | 171,969,368 bytes | `wo_seg` | Fast two-panel bald conversion without segmentation preprocessing |
| `bald_konvertor_w_seg_000004900.safetensors` | 171,969,368 bytes | `w_seg` | Segmentation-guided four-panel refinement for higher-quality conversion |
The filenames use `konvertor` for historical compatibility with the released checkpoint names; the component is called Bald Converter (or `BaldKonverter`) in the paper and codebase.
Model Details
| Item | Value |
|---|---|
| Base model | black-forest-labs/FLUX.1-Kontext-dev |
| Adapter type | LoRA |
| Framework | Diffusers |
| Weight format | Safetensors |
| Training data | deepmancer/baldy |
| Primary task | Portrait bald conversion / hair removal |
| Released with | HairPort, ACM SIGGRAPH 2026 |
These files are adapter weights only. They do not include the base FLUX model, preprocessing models, or the HairPort inference code.
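Because the adapters are standard Diffusers LoRAs, they can in principle be attached to the base pipeline directly, outside HairPort. The following is a hedged sketch: `load_bald_converter` is an illustrative helper, not part of the HairPort codebase, and it does not reproduce the panel assembly and prompting that the HairPort pipeline performs around the model.

```python
# Checkpoint names as released in this repository.
CHECKPOINTS = {
    "wo_seg": "bald_konvertor_wo_seg_000003400.safetensors",
    "w_seg": "bald_konvertor_w_seg_000004900.safetensors",
}

def load_bald_converter(mode: str = "wo_seg"):
    """Attach a Bald Converter LoRA to the FLUX.1 Kontext base pipeline."""
    weight_name = CHECKPOINTS[mode]  # raises KeyError on an unknown mode

    import torch
    from diffusers import FluxKontextPipeline  # requires a recent diffusers

    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights("deepmancer/bald_konverter", weight_name=weight_name)
    return pipe.to("cuda")
```

Note that this only produces a LoRA-equipped FLUX pipeline; to get HairPort's results you still need its panel construction and prompts, so the supported path remains the HairPort CLI and Python API below.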
Modes
| Mode | Checkpoint | Speed | Quality | Preprocessing |
|---|---|---|---|---|
| `wo_seg` | `bald_konvertor_wo_seg_000003400.safetensors` | Fast | Good | None |
| `w_seg` | `bald_konvertor_w_seg_000004900.safetensors` | Slower | Best | SAM3, BEN2, and face parsing |
| `auto` | Uses both checkpoints | Slower | Best | Runs `wo_seg`, then refines with `w_seg` |
`wo_seg` assembles a two-panel image and asks FLUX to generate the bald version in the right panel. `w_seg` uses a four-panel layout with segmentation context to improve structure preservation and remove residual hair more reliably.
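The in-context layout can be pictured as pasting the source portrait into one panel of a wider canvas and leaving the target panel blank for the model to complete. A toy sketch with Pillow; the actual panel geometry, ordering, and any guide content are internal to HairPort, so this only illustrates the idea:

```python
from PIL import Image

def make_two_panel(portrait: Image.Image) -> Image.Image:
    """Place the portrait in the left panel and leave the right panel blank.

    FLUX.1 Kontext is then asked to fill the right panel with the bald
    version, conditioned on the left panel. Panel geometry here is
    illustrative only, not HairPort's exact layout.
    """
    w, h = portrait.size
    canvas = Image.new("RGB", (2 * w, h), "white")
    canvas.paste(portrait, (0, 0))  # left: source portrait
    # right half stays blank for the model to complete
    return canvas

grid = make_two_panel(Image.new("RGB", (512, 512), "gray"))
print(grid.size)  # (1024, 512)
```

The `w_seg` mode extends the same idea to a four-panel canvas that also carries segmentation context.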
Installation
The recommended way to use these weights is through the HairPort source checkout:
```shell
git clone https://github.com/deepmancer/HairPort.git
cd HairPort
bash scripts/install.sh
```
If the base FLUX model or related gated assets require authentication, log in to Hugging Face before running inference:
```shell
huggingface-cli login
```
The LoRA checkpoints are downloaded automatically from this repository on first use.
Quick Start
Run Bald Converter from the HairPort repository root:
```shell
python -m hairport.bald_konverter.cli \
    --input photo.jpg \
    --output bald.png \
    --mode auto
```
Fast mode without segmentation preprocessing:
```shell
python -m hairport.bald_konverter.cli \
    --input photo.jpg \
    --output bald_fast.png \
    --mode wo_seg
```
Batch processing:
```shell
python -m hairport.bald_konverter.cli \
    --input-dir ./faces \
    --output-dir ./bald \
    --mode auto
```
Save intermediate masks and FLUX input grids:
```shell
python -m hairport.bald_konverter.cli \
    --input photo.jpg \
    --output bald.png \
    --mode auto \
    --save-intermediates
```
Python API
```python
from hairport.bald_konverter import BaldKonverterPipeline

pipeline = BaldKonverterPipeline(mode="auto", device="cuda")
result = pipeline("portrait.jpg", return_intermediates=True)
result.bald_image.save("bald.png")

# Optional debugging / visualization outputs
if result.flux_input_wo_seg is not None:
    result.flux_input_wo_seg.save("flux_input_wo_seg.png")
if result.flux_input_w_seg is not None:
    result.flux_input_w_seg.save("flux_input_w_seg.png")

pipeline.teardown()
```
Use a custom local checkpoint path if needed:
```python
from hairport.bald_konverter import BaldKonverterPipeline

pipeline = BaldKonverterPipeline(
    mode="wo_seg",
    lora_path_wo_seg="/path/to/bald_konvertor_wo_seg_000003400.safetensors",
)
result = pipeline("portrait.jpg")
result.bald_image.save("bald.png")
```
Download Checkpoints Manually
```python
from huggingface_hub import hf_hub_download

repo_id = "deepmancer/bald_konverter"

wo_seg_path = hf_hub_download(
    repo_id=repo_id,
    filename="bald_konvertor_wo_seg_000003400.safetensors",
)
w_seg_path = hf_hub_download(
    repo_id=repo_id,
    filename="bald_konvertor_w_seg_000004900.safetensors",
)

print(wo_seg_path)
print(w_seg_path)
```
Requirements
For inference through HairPort:
- Python >= 3.10
- CUDA GPU recommended
- Approximately 24 GB VRAM recommended for FLUX inference
- `torch`, `diffusers`, `transformers`, `safetensors`, and `huggingface-hub`
- For `w_seg`/`auto`: SAM3, BEN2, and face-parsing dependencies
- Optional FLAME/SHeaP assets for FLAME-guided segmentation
Refer to the HairPort README for the current full dependency recipe.
Training Data
The Bald Converter LoRAs were trained with Baldy, a 6,400-sample synthetic paired dataset of hair/bald portraits generated for HairPort.
- Dataset: deepmancer/baldy
- Each sample includes paired `hair_image` and `bald_image` columns, plus hair renders, background images, camera metadata, and rendering parameters.
Intended Use
These weights are intended for research and development in:
- bald conversion and hair removal
- hairstyle-transfer preprocessing
- 3D-aware hair import and transfer systems
- paired image-to-image generation experiments
- HairPort-style source-face preparation
Limitations And Responsible Use
Bald Converter is designed for portrait preprocessing and research use. Outputs may contain artifacts, incomplete hair removal, identity drift, or failures under unusual poses, occlusions, lighting, accessories, or image crops.
The model should not be used to misrepresent people, produce deceptive identity edits, or infer sensitive attributes. Users should review generated outputs before downstream use.
Related Resources
- Project page: deepmancer.github.io/HairPort
- Code: github.com/deepmancer/HairPort
- Baldy dataset: deepmancer/baldy
- Weights: deepmancer/bald_konverter
Citation
If you use Bald Converter, Baldy, or HairPort in your research, please cite:
```bibtex
@inproceedings{heidari2026hairport,
  title     = {HairPort: In-context 3D-Aware Hair Import and Transfer for Images},
  author    = {A. Heidari and A. Alimohammadi and W. Michel Pinto Lira and A. Bar-Lev and A. Mahdavi-Amiri},
  booktitle = {ACM SIGGRAPH 2026},
  year      = {2026}
}
```
License
The LoRA weights in this repository are released under the Apache License 2.0. Use of the base model and any external preprocessing models is governed by their respective licenses and terms.