---
license: cc-by-nc-4.0
task_categories:
- depth-estimation
- image-classification
tags:
- depth-estimation
- hand-tracking
- contact-detection
- virtual-keyboard
- human-computer-interaction
- intel-realsense
- d405
- close-range
- metric-depth
- cvpr2026
language:
- en
pretty_name: D405 Hand-Surface Depth Dataset
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: splits/train.jsonl
  - split: validation
    path: splits/val.jsonl
  - split: test
    path: splits/test.jsonl
dataset_info:
  features:
  - name: rgb
    dtype: image
  - name: depth
    dtype: image
  - name: participant
    dtype: string
  - name: session
    dtype: string
  - name: frame_id
    dtype: string
  - name: angle
    dtype: int32
  - name: surface
    dtype: string
  - name: has_contact
    dtype: bool
  - name: handedness
    dtype: string
  splits:
  - name: train
    num_examples: 42954
  - name: validation
    num_examples: 5040
  - name: test
    num_examples: 5306
---
# D405 Hand-Surface Depth Dataset

A multi-user, multi-angle RGB-depth dataset for close-range hand-surface interaction research, captured with an Intel RealSense D405 stereo depth camera.

**Paper:** *Real-Time Multimodal Fingertip Contact Detection via Depth and Motion Fusion for Vision-Based Human-Computer Interaction* (CVPR 2026)

**Authors:** Mukhiddin Toshpulatov, Wookey Lee, Suan Lee, Geehyuk Lee
## Key Statistics
| Metric | Value |
|---|---|
| Total RGB-depth pairs | 53,300 |
| Participants | 15 (anonymized, non-consecutive IDs P01-P18) |
| Camera angles | 30°, 45°, 60°, 90° |
| Resolution | 640 × 480 |
| Frame rate | 30 FPS |
| Depth range | 70-500 mm |
| Depth format | uint16 PNG (millimeters) |
| RGB format | BGR uint8 PNG |
## Splits (Participant-Stratified)
| Split | Frames | Participants |
|---|---|---|
| Train | 42,954 | P01, P03, P04, P06, P07, P09-P13, P18 |
| Validation | 5,040 | P05, P16 |
| Test | 5,306 | P08, P17 |
Participant-stratified splits prevent identity leakage across partitions.
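Because the splits are participant-stratified, a quick sanity check is that no participant ID appears in more than one partition. A minimal sketch, using the participant lists from the table above (the helper name is illustrative):

```python
# Participant lists as published in the split table of this card.
SPLIT_PARTICIPANTS = {
    "train": {"P01", "P03", "P04", "P06", "P07",
              "P09", "P10", "P11", "P12", "P13", "P18"},
    "validation": {"P05", "P16"},
    "test": {"P08", "P17"},
}

def splits_are_disjoint(splits):
    """Return True if no participant ID appears in more than one split."""
    seen = set()
    for ids in splits.values():
        if seen & ids:  # overlap found -> identity leakage
            return False
        seen |= ids
    return True
```

The same check can be run against the loaded dataset by collecting the unique `participant` values of each split.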
## Usage
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("muxiddin19/d405-hand-surface-depth")

# Access splits
train = dataset["train"]
val = dataset["validation"]
test = dataset["test"]

# Example: iterate over training samples
for sample in train:
    rgb_image = sample["rgb"]      # PIL Image (640x480 RGB)
    depth_image = sample["depth"]  # PIL Image (640x480 uint16, values in mm)
    participant = sample["participant"]
    angle = sample["angle"]
    has_contact = sample["has_contact"]
    print(f"{participant}, {angle}°, contact={has_contact}")
    break
```
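Per-condition subsets (one camera angle, one surface type, contact frames only) can be carved out with the `datasets` library's built-in `Dataset.filter`, e.g. `train.filter(lambda s: s["angle"] == 45)`. The plain-Python helper below sketches the same selection logic over sample dicts; the function name is illustrative, and the field names come from the schema above:

```python
def select_samples(samples, angle=None, surface=None, contact=None):
    """Filter an iterable of sample dicts by camera angle, surface
    type, and/or contact state; None means 'no constraint'."""
    out = []
    for s in samples:
        if angle is not None and s["angle"] != angle:
            continue
        if surface is not None and s["surface"] != surface:
            continue
        if contact is not None and s["has_contact"] != contact:
            continue
        out.append(s)
    return out
```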
## Data Structure
Each sample contains:
- rgb: Color image (640×480, stored as BGR-ordered uint8 PNG)
- depth: Depth map (640×480, uint16 PNG, values in millimeters, 0 = invalid)
- participant: Anonymized participant ID (P01-P18)
- session: Recording session identifier
- frame_id: Frame number within session
- angle: Camera viewing angle in degrees (30, 45, 60, or 90)
- surface: Surface type (white_desk, wood_grain, semi_reflective)
- has_contact: Whether any fingertip is in contact state
- handedness: Detected hand (Left/Right)
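Since depth is stored as uint16 millimeters with 0 marking invalid pixels, a common first step is converting to float meters with invalid pixels masked out. A minimal sketch (the function name is illustrative):

```python
import numpy as np

DEPTH_INVALID = 0  # per this card, 0 marks invalid depth pixels

def depth_to_meters(depth_u16):
    """Convert a uint16 depth array in millimeters to float32 meters,
    mapping invalid (zero) pixels to NaN."""
    depth = depth_u16.astype(np.float32) / 1000.0
    depth[depth_u16 == DEPTH_INVALID] = np.nan
    return depth
```

With `datasets`, the stored PIL depth image can be passed through `np.asarray(sample["depth"])` before conversion.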
## Additional Annotations (in repo)
Per-frame JSON annotations in data/<participant>/<session>/annotations/ contain:
- 21 MediaPipe hand landmarks (pixel coordinates)
- 5 fingertip metric depths
- Per-fingertip binary contact/hover labels
Per-session metadata.json files contain:
- Camera intrinsics (fx, fy, cx, cy)
- Depth scale and processing filter parameters
- RANSAC surface plane coefficients
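With the per-session intrinsics and plane coefficients, fingertip landmarks can be lifted to 3D camera space and compared against the surface plane. The sketch below assumes a standard pinhole model and an (a, b, c, d) plane convention with ax + by + cz + d = 0; both conventions are assumptions here, so check each session's metadata.json for the actual ones:

```python
import numpy as np

def deproject(u, v, z_mm, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth z (mm) to a 3D
    camera-space point, assuming a standard pinhole model."""
    x = (u - cx) * z_mm / fx
    y = (v - cy) * z_mm / fy
    return np.array([x, y, z_mm])

def distance_to_plane(point, plane):
    """Signed distance from a 3D point to a plane (a, b, c, d),
    assuming the convention ax + by + cz + d = 0."""
    a, b, c, d = plane
    normal = np.array([a, b, c])
    return (normal @ point + d) / np.linalg.norm(normal)
```

Thresholding this distance is one simple way to derive a contact/hover decision from the fingertip depths; the paper's actual detector fuses depth with motion cues.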
## Recording Setup
- Sensor: Intel RealSense D405 (optimal range 7-50 cm, <0.5 mm accuracy at 35 cm)
- Mounting: 35 cm above desk at 45° primary angle
- Lighting: Diffuse LED, 5000K, 800 lux
- Surfaces: White desk (primary), wood grain, semi-reflective laminate
- Additional cameras: RGB cameras at 30° and 60° for multi-angle data
## Benchmark Results
Fine-tuning Depth Anything V2 ViT-S on this dataset:
| Metric | Pre-trained | Fine-tuned | Improvement |
|---|---|---|---|
| MAE (mm) | 12.3 | 3.84 | 68% reduction |
| δ₁ (%) | 87.2 | 95.96 | +8.8 pp |
| RMSE (mm) | 18.4 | 4.8 | 74% reduction |
Downstream contact detection: 94.2% accuracy, 94.4% F1-score
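For reference, the reported depth metrics can be computed from predictions in a few lines. The sketch below uses the usual definitions of MAE, RMSE, and δ₁ (ratio threshold 1.25) over valid ground-truth pixels; the paper's exact evaluation protocol may differ in details such as masking:

```python
import numpy as np

def depth_metrics(pred_mm, gt_mm):
    """MAE (mm), RMSE (mm), and delta_1 accuracy (% of pixels with
    max(pred/gt, gt/pred) < 1.25), computed over valid (gt > 0) pixels."""
    mask = gt_mm > 0
    pred, gt = pred_mm[mask], gt_mm[mask]
    mae = np.abs(pred - gt).mean()
    rmse = np.sqrt(((pred - gt) ** 2).mean())
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = (ratio < 1.25).mean() * 100.0
    return mae, rmse, delta1
```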
## Ethics
- All participants gave informed consent
- No PII (face/name) included — only hand images with anonymous IDs
- Not intended for biometric identification or surveillance
## Links
- GitHub: Multimodal-Fingertip-Contact-Detection
- Model: vrkeyb-cvpr2026
- Project Page: Project Website
## Citation
```bibtex
@inproceedings{toshpulatov2026realtime,
  title={Real-Time Multimodal Fingertip Contact Detection via Depth and
         Motion Fusion for Vision-Based Human-Computer Interaction},
  author={Toshpulatov, Mukhiddin and Lee, Wookey and Lee, Suan and Lee, Geehyuk},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision
             and Pattern Recognition (CVPR)},
  year={2026}
}
```