
flux-restoration

Inference-only release package for a unified blind and reference-based face restoration adapter built on FLUX.2-klein-base-4B.

Visual Results

Selected qualitative results from the bundled examples are shown below. Each row pairs the degraded input with the blind, single-reference, and multi-reference restoration results.

Sample identities (the image columns Degraded, Blind, Ref-Single, and Ref-Multi are not rendered here):
alyssa_chia/005
bill_gates/003
chen_baoguo/004
donald_trump/003
donald_trump/005
eddie_peng/001
elon_musk/001
gao_yuanyuan/005
jensen_huang/004
lei_jun/005
leslie_cheung/003
leslie_cheung/001
liu_yifei/001
liu_yifei/003
sam_altman/001
sam_altman/002
sam_altman/003
sam_altman/004
satya_nadella/002
sundar_pichai/003
tim_cook/002
tony_leung_chiu_wai/002
zhang_ziyi/004
zhao_hongfei/001
zhao_hongfei/003

Model

The LoRA adapter weights are bundled at:

pretrained_models/lora_weights.safetensors

The path is relative to the package root; do not hardcode an absolute path in code.

Download the FLUX.2-klein-base-4B snapshot and pass its directory via --model_dir, for example:

--model_dir /path/to/FLUX.2-klein-base-4B

The directory should contain:

flux-2-klein-base-4b.safetensors
vae/
text_encoder/
tokenizer/
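A minimal sanity check for the snapshot layout listed above can be sketched as follows; the function name `missing_entries` is illustrative and not part of this package:

```python
import os

# Files/directories the snapshot passed via --model_dir should contain,
# per the layout listed above.
REQUIRED = [
    "flux-2-klein-base-4b.safetensors",
    "vae",
    "text_encoder",
    "tokenizer",
]

def missing_entries(model_dir: str) -> list[str]:
    """Return the required entries that are absent from model_dir."""
    return [name for name in REQUIRED
            if not os.path.exists(os.path.join(model_dir, name))]
```

For a complete snapshot, `missing_entries("/path/to/FLUX.2-klein-base-4B")` returns an empty list; anything it returns is what still needs to be downloaded.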

Install

cd release/flux-restoration
pip install -r requirements.txt

Recommended runtime:

  • Python 3.9+
  • PyTorch 2.1+

Inference

scripts/infer.py

It supports two usage patterns:

  1. Direct single-image inference
  2. Batch inference from a manifest JSON

Modes:

  • blind
  • ref-single
  • ref-multi

Direct Single-Image Inference

Blind

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --mode blind \
  --degraded_image examples/lq/bill_gates/1.png \
  --output_dir outputs/demo

Single Reference

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --mode ref-single \
  --degraded_image examples/lq/elon_musk/3.png \
  --reference_image examples/hq/elon_musk/1.png \
  --output_dir outputs/demo

Multiple References

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --mode ref-multi \
  --degraded_image examples/lq/zhang_ziyi/3.png \
  --reference_image examples/hq/zhang_ziyi/1.png \
  --reference_image examples/hq/zhang_ziyi/2.png \
  --reference_image examples/hq/zhang_ziyi/4.png \
  --output_dir outputs/demo

Notes:

  • blind ignores any reference input.
  • ref-single uses the first provided --reference_image.
  • ref-multi uses up to --max_reference_images references (default: 3).
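The reference-handling rules above can be sketched as a simplified illustration (this is not the actual logic in scripts/infer.py):

```python
def select_references(mode, reference_images, max_reference_images=3):
    """Mirror the documented behavior of the three modes."""
    if mode == "blind":
        return []                                  # references ignored
    if mode == "ref-single":
        return reference_images[:1]                # first --reference_image only
    if mode == "ref-multi":
        return reference_images[:max_reference_images]
    raise ValueError(f"unknown mode: {mode}")
```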

Batch Inference From JSON

Three manifests are bundled under:

examples/manifests/
  blind.json
  ref_single.json
  ref_multi.json

Batch Blind

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --manifest_json examples/manifests/blind.json

Batch Single-Reference

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --manifest_json examples/manifests/ref_single.json

Batch Multi-Reference

CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
  --model_dir /path/to/FLUX.2-klein-base-4B \
  --manifest_json examples/manifests/ref_multi.json

Each manifest item contains:

  • degraded_image
  • target_image
  • reference_images
  • output_path
  • mode
  • sample metadata such as identity and index
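Based on the field list above, a manifest entry might look like the following; the concrete paths, values, and metadata key names (`identity`, `index`) are illustrative assumptions, not copied from the bundled manifests:

```python
import json

# Hypothetical manifest entry with the documented fields.
item = {
    "degraded_image": "examples/lq/elon_musk/3.png",
    "target_image": "examples/hq/elon_musk/3.png",
    "reference_images": ["examples/hq/elon_musk/1.png"],
    "output_path": "examples/outputs/ref_single/elon_musk/003/pred.png",
    "mode": "ref-single",
    "identity": "elon_musk",
    "index": 3,
}

# A manifest is a JSON list of such items.
print(json.dumps([item], indent=2))
```

Inspect the bundled files under examples/manifests/ for the authoritative schema.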

Bundled Examples

Bundled example assets are organized as:

examples/
  lq/
    <identity>/
      1.png
      2.png
      ...
  hq/
    <identity>/
      1.png
      2.png
      ...
  manifests/
    blind.json
    ref_single.json
    ref_multi.json
    summary.json
  outputs/
    blind/
    ref_single/
    ref_multi/

Path convention:

  • degraded input comes from examples/lq/<identity>/<index>.png
  • references come from examples/hq/<identity>/<other_index>.png

The bundled manifests in examples/manifests/ point to these files and to the corresponding output locations under examples/outputs/.
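The path convention can be expressed as a small helper; `example_paths` is a hypothetical function for illustration, not part of scripts/infer.py:

```python
def example_paths(identity: str, index: int, ref_indices):
    """Build input paths per the examples/ convention above: the degraded
    image comes from lq/, references from hq/ at other indices."""
    degraded = f"examples/lq/{identity}/{index}.png"
    refs = [f"examples/hq/{identity}/{i}.png"
            for i in ref_indices if i != index]   # exclude the degraded index
    return degraded, refs
```

For example, `example_paths("zhang_ziyi", 3, [1, 2, 3, 4])` reproduces the inputs used in the multi-reference command above.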

Outputs

For manifest runs, predictions are written to the paths stored in the JSON files, for example:

examples/outputs/blind/bill_gates/001/pred.png
examples/outputs/ref_single/elon_musk/003/pred.png
examples/outputs/ref_multi/zhang_ziyi/003/pred.png

For direct single-image inference, outputs are written under --output_dir, which defaults to:

outputs/release_lora_ref/