---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: hairstyle_id
    dtype: string
  - name: view
    dtype: string
  - name: source
    dtype: string
  - name: hairstyle_source_id
    dtype: string
  - name: hair_image
    dtype: image
  - name: bald_image
    dtype: image
  - name: hair_render
    dtype: image
  - name: background_image
    dtype: image
  - name: render_params_json
    dtype: string
  - name: background_prompt
    dtype: string
  - name: camera_focal_length
    dtype: float64
  - name: camera_location_x
    dtype: float64
  - name: camera_location_y
    dtype: float64
  - name: camera_location_z
    dtype: float64
  - name: camera_rotation_x
    dtype: float64
  - name: camera_rotation_y
    dtype: float64
  - name: camera_rotation_z
    dtype: float64
  - name: lighting_preset
    dtype: string
  - name: body_gender
    dtype: string
  - name: face_expression
    dtype: string
  - name: hair_melanin
    dtype: float64
  - name: hair_roughness
    dtype: float64
  - name: has_garments
    dtype: bool
  - name: views_available
    dtype: string
  splits:
  - name: train
    num_examples: 6400
license: cc-by-4.0
task_categories:
- image-to-image
tags:
- hair
- bald
- paired-data
- synthetic
- blender
- controlnet
- smplx
- 3d-rendering
pretty_name: Baldy Dataset
size_categories:
- 1K<n<10K
---
# Baldy Dataset

A synthetic dataset of paired hair/bald images for bald reconstruction research, generated from physically modeled hairstyles rendered on SMPL-X bodies in Blender.
## Dataset Description
The Baldy dataset provides pixel-aligned, identity-consistent pairs of images showing the same person with and without hair. It was created using a multi-stage pipeline:
- **3D Hair Modeling:** Hairstyles from DiffLocks, Hair20K, USC-HairSalon, and CT2Hair are aligned to SMPL-X head meshes with varying poses, expressions, and optional BEDLAM clothing.
- **Blender Rendering:** Hair is rendered under diverse camera views, lighting conditions, and material settings at 1024×1024 resolution.
- **Photorealistic Generation:** ControlNet++ and SDXL-based pipelines generate photorealistic composites, with Flux Kontext refinement for identity preservation.
## Dataset Statistics

**Total samples:** 6,400
### Samples per View
| View | Count |
|---|---|
| back | 99 |
| front | 6009 |
| side | 292 |
### Samples per Hairstyle Source
| Source | Count |
|---|---|
| ct2hair | 9 |
| difflocks | 3197 |
| hair20k | 2824 |
| usc | 370 |
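The per-view and per-source counts above can be recomputed from the loaded split. A minimal sketch using `collections.Counter`, shown on a toy list standing in for the real `view` column:

```python
from collections import Counter

# Toy stand-in for ds["view"]; with the real dataset this would be
# ds = load_dataset("deepmancer/baldy", split="train") followed by ds["view"]
views = ["front", "front", "side", "back", "front"]

# Tally how many samples exist per camera view
view_counts = Counter(views)
print(dict(view_counts))  # {'front': 3, 'side': 1, 'back': 1}
```

The same pattern applied to the `source` column reproduces the second table.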
## Data Fields
| Field | Type | Description |
|---|---|---|
| hairstyle_id | string | Unique zero-padded sequential ID (e.g., "000042") |
| view | string | Camera view: "front", "back", or "side" |
| source | string | Hairstyle source: "difflocks", "hair20k", "usc", or "ct2hair" |
| hairstyle_source_id | string | Source-specific hairstyle identifier |
| hair_image | Image | Photorealistic image of the person with hair (1024×1024 PNG, auto-loaded) |
| bald_image | Image | Photorealistic image of the same person without hair (1024×1024 PNG, auto-loaded) |
| hair_render | Image | Blender-rendered hair on a transparent background (1024×1024 PNG, auto-loaded) |
| background_image | Image | Scene background image (1024×1024 PNG, auto-loaded) |
| render_params_json | string | Full Blender render parameters as an embedded JSON string |
| background_prompt | string | Text prompt used to generate the background (empty string if unavailable) |
| camera_focal_length | float | Camera focal length (mm) |
| camera_location_x | float | Camera X position in Blender world space |
| camera_location_y | float | Camera Y position in Blender world space |
| camera_location_z | float | Camera Z position in Blender world space |
| camera_rotation_x | float | Camera X rotation in radians |
| camera_rotation_y | float | Camera Y rotation in radians |
| camera_rotation_z | float | Camera Z rotation in radians |
| lighting_preset | string | Lighting preset name (e.g., "studio", "outdoor") |
| body_gender | string | Subject gender from the SMPL-X body configuration |
| face_expression | string | Facial expression (e.g., "angry", "neutral"; empty string for some DiffLocks samples) |
| hair_melanin | float | Hair melanin value controlling color darkness |
| hair_roughness | float | Hair surface roughness |
| has_garments | bool | Whether BEDLAM clothing is applied to the body |
| views_available | string | All camera views available for this hairstyle (pipe-separated, e.g., "front\|back\|side") |
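The two string-encoded fields decode with the standard library: `views_available` is pipe-separated and `render_params_json` is embedded JSON. A sketch with illustrative values (the JSON keys below are hypothetical placeholders, not the dataset's actual parameter schema):

```python
import json

# Pipe-separated list of views rendered for this hairstyle
views_available = "front|back|side"  # example value from the table above
views = views_available.split("|")
print(views)  # ['front', 'back', 'side']

# render_params_json holds the full Blender render parameters; these keys
# are hypothetical placeholders for illustration only
render_params_json = '{"resolution": 1024, "samples": 128}'
params = json.loads(render_params_json)
print(params["resolution"])  # 1024
```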
## Usage
```python
import json

from datasets import load_dataset

# Load the full dataset
ds = load_dataset("deepmancer/baldy", split="train")

# Images are automatically decoded as PIL Images
sample = ds[0]
sample["hair_image"]        # PIL.Image (person with hair, auto-loaded)
sample["bald_image"]        # PIL.Image (person without hair, auto-loaded)
sample["hair_render"]       # PIL.Image (Blender hair render, auto-loaded)
sample["background_image"]  # PIL.Image (scene background, auto-loaded)

# Filter by view
front = ds.filter(lambda x: x["view"] == "front")
back = ds.filter(lambda x: x["view"] == "back")
side = ds.filter(lambda x: x["view"] == "side")

# Filter by hairstyle source
hair20k = ds.filter(lambda x: x["source"] == "hair20k")

# Access render parameters (embedded JSON)
render_params = json.loads(sample["render_params_json"])

# Stream the dataset (no full download needed)
ds_stream = load_dataset("deepmancer/baldy", split="train", streaming=True)
for sample in ds_stream:
    print(sample["hairstyle_id"])
    break
```
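The `camera_rotation_*` fields can be turned into a world-space rotation matrix. A sketch assuming Blender's default XYZ Euler order (so R = Rz · Ry · Rx); this ordering is an assumption worth verifying against `render_params_json`:

```python
import numpy as np

def camera_rotation_matrix(rx: float, ry: float, rz: float) -> np.ndarray:
    """Build a 3x3 rotation matrix from the camera_rotation_* fields (radians).

    Assumes Blender's default XYZ Euler convention, i.e. R = Rz @ Ry @ Rx.
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Sanity check: zero rotation gives the identity matrix
assert np.allclose(camera_rotation_matrix(0.0, 0.0, 0.0), np.eye(3))
```

Combined with `camera_location_x/y/z` and `camera_focal_length`, this gives the full camera pose for 3D-aware uses of the renders.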
## File Structure
The dataset is stored as sharded Parquet files with embedded images (~1GB per shard):
```
data/
├── train-00000-of-NNNNN.parquet
├── train-00001-of-NNNNN.parquet
├── ...
└── train-NNNNN-of-NNNNN.parquet
```
Each Parquet file contains all columns including image bytes — no external files are needed. Images are decoded as PIL Images automatically when accessing rows.
## Citation
If you use this dataset, please cite:
```bibtex
@article{baldy2025,
  title={HairPort: In-context 3D-Aware Hair Import and Transfer for Images},
  year={2025}
}
```
## License
This dataset is released under the CC BY 4.0 license.