Flow Matching Checkpoints (CelebA-64)

Model checkpoints from "From Diffusion to One-Step Generation: A Controlled Study of Flow Matching, Rectified Flow, and Inference Acceleration on CelebA-64".

Code: github.com/kshitiz-1225/flowmatch-code

Architecture

  • Model: UNet (77.7M parameters)
  • Dataset: CelebA, 64×64 resolution
  • Framework: PyTorch

Checkpoint Index

Teacher

| File | Description |
|---|---|
| `teacher/2rf_rfpp.pt` | 2-rectified-flow teacher (RFPP), used to generate all reflow pairs |
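
The reflow pairs mentioned above come from simulating the teacher's flow ODE from noise to samples and keeping the endpoint couplings. A minimal sketch of that pairing step, assuming a `teacher_v(x, t)` velocity signature (the actual model interface and loading code are defined in the linked code repository):

```python
import torch

@torch.no_grad()
def make_reflow_pairs(teacher_v, n_pairs, num_steps=100):
    # Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (sample) with Euler,
    # keeping the (noise, sample) couplings used as reflow training pairs.
    x0 = torch.randn(n_pairs, 3, 64, 64)
    x = x0.clone()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((n_pairs,), i * dt)
        x = x + dt * teacher_v(x, t)
    return x0, x

# Toy velocity field stands in for the 2-rectified-flow teacher UNet:
x0, x1 = make_reflow_pairs(lambda x, t: -x, n_pairs=2, num_steps=10)
print(x0.shape, x1.shape)
```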

Core Ablations (Reflow Factorial Study)

| Experiment | Config | Checkpoints | KID@1 (×1e-3) |
|---|---|---|---|
| `ablation_A_uniform_mse` | Uniform + MSE | final, 50k, 100k | 8.01 |
| `ablation_B_ushaped_mse` | U-shaped + MSE | final, 50k, 100k | 8.19 |
| `ablation_C_uniform_lpips_huber` | Uniform + LPIPS-Huber | final, 50k, 100k | ~7.76 |
| `ablation_D_ushaped_lpips_huber` | U-shaped + LPIPS-Huber | final, 50k, 100k | 8.21 |

Extended Ablations

| Experiment | Config | Checkpoints |
|---|---|---|
| `ext_B2_beta2_mse` | Beta(2,2) + MSE | final, 50k, 100k |
| `ext_Bfix_rfpp_tdist_mse` | RFPP t-dist + MSE | final, 50k, 100k |
| `ext_Cext_uniform_lpips_huber_200k` | Uniform + LPIPS-Huber (200k iters) | final, 150k, 200k |
| `ext_D2_beta2_lpips_huber` | Beta(2,2) + LPIPS-Huber | final, 100k, 200k |
| `ext_Dext_ushaped_lpips_huber_200k` | U-shaped + LPIPS-Huber (200k iters) | final, 150k, 200k |
| `ext_Dfix_rfpp_tdist_lpips_huber` | RFPP t-dist + LPIPS-Huber | final, 100k, 200k |

Acceleration Methods

| Experiment | Method | KID@1 (×1e-3) | KID@2 | KID@5 |
|---|---|---|---|---|
| `ext_I_ect` | Easy Consistency Tuning | 6.56 | 5.76 | 5.43 |
| `ext_J_consistency_fm` | Consistency Flow Matching | 6.75 | 5.30 | 4.71 |
| `ext_K_pcm` | Phased Consistency Model | 6.64 | 5.98 | 5.60 |
| `ext_M_meanflow` | MeanFlow | 9.02 | 6.49 | 5.53 |
| `ext_G_shortcut` | Shortcut conditioning | 415.5 | – | – |
| `ext_F_self_distill` | Self-distillation | 92.3 | – | – |
| `ext_L_gan_distill` | GAN distillation | ~20 | – | – |

Timestep Distribution Variants

| Experiment | Description |
|---|---|
| `ext_H_adaptive_t` | Learned adaptive timestep schedule |
| `ext_H_uniformhist_lpips` | Uniform histogram + LPIPS |
| `ext_H_mixhist_lpips` | Mixed histogram + LPIPS |
| `ext_H_rfpphist_lpips` | RFPP-shaped histogram + LPIPS |
| `ext_H_adaptive_t_importance_weighted` | Importance-weighted adaptive |
| `ext_H_adaptive_t_objective_shaping` | Objective-shaped LPIPS |

Distillation Variants

| Experiment | Description |
|---|---|
| `ext_F_preempt_s4` | Progressive distill, stride=4 |
| `ext_F_preempt_s8` | Progressive distill, stride=8 |
| `ext_F_preempt_s8_lp02` | Progressive distill, stride=8, LPIPS=0.2 |
| `ext_F_preempt_s16` | Progressive distill, stride=16 |

Baseline

| Experiment | Description |
|---|---|
| `hw3_reflow_baseline` | Flow matching teacher → reflow baseline (uniform + MSE, 100k) |

Usage

```python
import torch
from huggingface_hub import hf_hub_download

# Download a specific checkpoint
path = hf_hub_download(
    repo_id="loralover/xv7k-fm-weights-archive",
    filename="archive/ablation_A_uniform_mse/reflow_final.pt",
)
checkpoint = torch.load(path, map_location="cpu")
```
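
The KID@K columns above correspond to K-step sampling. A minimal few-step Euler sampler for a flow-matching velocity model is sketched below, assuming a `model(x, t)` call signature; the checkpoint's actual model class and interface live in the code repository:

```python
import torch

@torch.no_grad()
def euler_sample(velocity_model, noise, num_steps=5):
    # Integrate the flow ODE dx/dt = v(x, t) from t=0 (noise) to t=1 (image)
    # with fixed-step Euler; num_steps plays the role of K in the KID@K tables.
    x = noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt)
        x = x + dt * velocity_model(x, t)
    return x

# Toy velocity field in place of the checkpointed UNet:
samples = euler_sample(lambda x, t: -x, torch.randn(2, 3, 64, 64), num_steps=5)
print(samples.shape)
```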

Directory Structure

```
teacher/                    # 2-rectified-flow teacher
archive/<experiment>/       # From checkpoint archive (3 ckpts each)
logs/<experiment>/          # From training logs (1-3 ckpts each)
```

Citation

See the code repository for full details.
