# Curated Face Detection Dataset

A drop-in dataset for training classical face detectors (Viola-Jones-style cascades, Haar-cascade classifiers, or any sliding-window pipeline that needs small grayscale face crops plus a large pool of natural-image negatives).

The face crops come from three well-known source datasets, all preprocessed to grayscale (single channel) with consistent square cropping. The hard-negative source is Caltech-256 with face/people/human categories filtered out, kept as raw color JPGs so you can extract negatives at any resolution.
## Dataset structure

Three splits are available:

| Split | Rows | Contents |
|---|---|---|
| train | ~68 k | CelebA 48×48 faces + FDDB 48×48 faces + CBCL 19×19 train faces/nofaces |
| test | ~24 k | CBCL 19×19 test faces + nofaces (classic VJ benchmark) |
| negatives | ~30 k | Caltech-256 filtered color JPGs (bootstrap negative pool) |
Each row has four columns:

| Column | Type | Description |
|---|---|---|
| `image` | `Image` | Decoded image (grayscale for CBCL/CelebA/FDDB, color for Caltech) |
| `label` | `ClassLabel` | `0` = noface, `1` = face |
| `source` | `string` | One of `"celeba"`, `"fddb"`, `"cbcl"`, `"caltech"` |
| `category` | `string` or `null` | Caltech category folder name (e.g. `"001.ak47"`); `null` for all other sources |
## Local layout (development)

```
datasets/
├── README.md              # This file (HF dataset card)
├── STATS.md               # Per-source counts, mean/std, mosaics
├── celeba/train/          # 50,000 PNGs @ 48×48 grayscale + meta.json
├── fddb/train/            # 11,383 PNGs @ 48×48 grayscale + meta.json
├── cbcl/
│   ├── train/
│   │   ├── faces/         # 2,429 PNGs @ 19×19 grayscale + meta.json
│   │   └── nofaces/       # 4,548 PNGs @ 19×19 grayscale + meta.json
│   └── test/
│       ├── faces/         # 472 PNGs @ 19×19 grayscale + meta.json
│       └── nofaces/       # 23,573 PNGs @ 19×19 grayscale + meta.json
└── caltech/
    ├── README.md
    └── source/            # 254 categories, ~29,900 color JPGs (~1.1 GB)
        ├── meta.json
        └── NNN.<category>/*.jpg
```
## Recommended use per source

Pick one face source for your training set; treat CBCL train as a historical reference rather than a practical training source. CBCL test remains valuable as an academic benchmark for direct comparison with the literature.
| Source | Faces / Negatives | Recommended for | Notes |
|---|---|---|---|
| CelebA | 50,000 / — | Training (default) — large, easy to scale | Frontal headshots, very tight alignment, narrow pose distribution |
| FDDB | 11,383 / — | Training when pose variation matters | Merged train+valid; varied poses, lighting, occlusion |
| CBCL train | 2,429 / 4,548 | Not recommended for training | Too small and narrow, pre-cropped 19×19 |
| CBCL test | 472 / 23,573 | Test set for academic comparison | Standard VJ benchmark; hostile 1:50 imbalance |
| Caltech-256 | — / ~30k images | Negative pool (hard-neg mining) | Raw color JPGs, faces/people/humans filtered |
## Crop conventions for face PNGs

FDDB and CelebA use a tight square crop with a 10% margin; CBCL is copied verbatim from the MIT release.

- FDDB: bbox xywh from COCO annotations, filtered to `min(w, h) >= 24 px` (smaller boxes upsample poorly to 48×48). Margin 10%, square crop centered on the bbox center, reflect-padded if it falls outside the source image.
- CelebA aligned: face bbox computed from the 5-point landmarks. From the eye and mouth midpoints, `face_height ≈ eye_to_mouth_distance × 2.63` (anthropometric ratio: eye-to-mouth is ~0.38 of full face height). The center is shifted up from the eye-mouth midpoint by `0.37 × eye_to_mouth`.
- CBCL: pre-cropped by MIT, no margin, no upsampling, native 19×19.
Each meta.json records the source raw image, original bbox/landmarks, and
extraction params, so any PNG is reproducible from the original raw datasets.
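The CelebA landmark rule can be sketched as follows. The landmark ordering (left eye, right eye, nose, left/right mouth corners) follows the CelebA convention, and exactly how the 10% margin combines with `face_height` is my reading of the description; the extraction params recorded in `meta.json` are authoritative.

```python
import numpy as np

def celeba_crop_box(landmarks):
    """Square crop box (x0, y0, x1, y1) from CelebA 5-point landmarks,
    given as (x, y) pairs in the order: left eye, right eye, nose,
    left mouth corner, right mouth corner."""
    lm = np.asarray(landmarks, dtype=float)
    eye_mid = (lm[0] + lm[1]) / 2.0
    mouth_mid = (lm[3] + lm[4]) / 2.0
    eye_to_mouth = np.linalg.norm(mouth_mid - eye_mid)

    face_height = eye_to_mouth * 2.63        # anthropometric ratio from above
    center = (eye_mid + mouth_mid) / 2.0
    center[1] -= 0.37 * eye_to_mouth         # shift up (image y grows downward)

    half = face_height * 1.10 / 2.0          # assumed: 10% margin on the side
    x0, y0 = center - half
    x1, y1 = center + half
    return x0, y0, x1, y1
```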
## Caltech-256 raw

Three categories are filtered out because they contain human/face content that would leak positives into a "negatives" set:

- `253.faces-easy-101` (frontal faces)
- `159.people` (whole-body people)
- `112.human-skeleton` (skulls/skeletal structure)

Filter rule: any directory whose name contains `face`, `people`, or `human` (case-insensitive). The full kept/excluded category list lives in `caltech/source/meta.json`.
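The filter rule amounts to a one-line substring check; a sketch (constant and function names are illustrative):

```python
# Case-insensitive substring filter for Caltech-256 category directories,
# matching the rule described above.
BANNED_TOKENS = ("face", "people", "human")

def keep_category(dirname: str) -> bool:
    """True if this category directory should stay in the negative pool."""
    name = dirname.lower()
    return not any(token in name for token in BANNED_TOKENS)
```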
The raw color JPGs are kept as-is; you pick the resolution and pool size when extracting hard-negative patches with your training pipeline.
## How to use

### With the `datasets` library (recommended)

```python
from datasets import load_dataset

# All splits at once
ds = load_dataset("salvacarrion/face-detection")

# Individual splits
train = load_dataset("salvacarrion/face-detection", split="train")
test = load_dataset("salvacarrion/face-detection", split="test")
negatives = load_dataset("salvacarrion/face-detection", split="negatives")

# Filter by source
celeba = train.filter(lambda x: x["source"] == "celeba")
fddb = train.filter(lambda x: x["source"] == "fddb")

# Access an image (returns a PIL Image)
img = train[0]["image"]
```
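If you want flat NumPy arrays instead of a `datasets` object (e.g. to feed a cascade trainer directly), a minimal sketch; the helper and the 24 px resolution are my choices, not this dataset's API:

```python
import numpy as np

def split_to_arrays(split, resolution=24):
    """Resize every image in a split to a square grayscale array and
    stack into X of shape (N, resolution, resolution) uint8 and y (N,) int64."""
    X, y = [], []
    for row in split:
        img = row["image"].convert("L").resize((resolution, resolution))
        X.append(np.asarray(img, dtype=np.uint8))
        y.append(row["label"])
    return np.stack(X), np.asarray(y, dtype=np.int64)

# Usage with the split loaded above:
# X, y = split_to_arrays(train, resolution=24)
```

Note this resizes the 19×19 CBCL crops up and the 48×48 crops down to one common window size, which is what a sliding-window cascade expects.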
### With this training repo

After downloading the dataset, build NumPy bundles ready for cascade training:

```bash
# 1) Download
huggingface-cli download salvacarrion/face-detection \
    --repo-type dataset --local-dir datasets

# 2) Build training-ready NPY bundles (one face source + Caltech negative pool)
python tools/prepare_data.py \
    --face-source celeba \
    --n-faces 10000 \
    --resolution 24 \
    --augment

# 3) Train your cascade
python main.py train --dataset-path datasets ...
```
## Sources and licenses

This dataset is a derivative work combining four sources, each under its own terms. Use is restricted to non-commercial research.
| Source | Original URL | License |
|---|---|---|
| FDDB (Roboflow re-pack v1, 2022) | https://public.roboflow.ai/object-detection/undefined | CC BY 4.0 |
| CelebA (aligned) | http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html | Research-only, non-commercial |
| MIT CBCL Face Database #1 | http://cbcl.mit.edu/software-datasets/FaceData2.html | Research-only, non-commercial |
| Caltech-256 | https://data.caltech.edu/records/nyy15-4j048 | Research-only, non-commercial |
By using this dataset you agree to abide by the licenses of all four underlying sources. CelebA and CBCL explicitly prohibit redistribution; this re-pack is provided for research use only under that understanding. If you represent any of the original dataset providers and want this taken down, open an issue on the dataset repo.
Cite the original sources, not this re-pack, in academic work.