
D405 Hand-Surface Depth Dataset

Real-Time Multimodal Fingertip Contact Detection via Depth and Motion Fusion for Vision-Based Human-Computer Interaction

CVPR 2026

Mukhiddin Toshpulatov1,2,4,5 · Wookey Lee2 · Suan Lee3 · Geehyuk Lee1

1SpaceTop, SoC, KAIST    2VoiceAI, BMSE, Inha University    3SoCS, Semyung University    4Dep. of CE, Gachon University, South Korea    5Jizzakh branch of the National University of Uzbekistan

Paper · GitHub · Project Page

Dataset Description

A multi-user, multi-angle RGB-depth dataset captured with an Intel RealSense D405 stereo depth camera for close-range hand-surface interaction research. The dataset supports two tasks:

  1. Metric depth estimation fine-tuning — close-range (7–50 cm) depth supervision for monocular models
  2. Fingertip contact detection — per-fingertip contact/hover labels derived from calibrated surface plane geometry

This dataset was used to fine-tune Depth Anything V2 ViT-S, reducing depth MAE by 68% (from 12.3 mm to 3.84 mm over the 25–45 cm operating range), and to train and evaluate a velocity-gated hysteresis contact detector that achieves 94.2% accuracy and a 94.4% F1-score.

Dataset Summary

| Property | Value |
|---|---|
| Total RGB-depth pairs | 53,300 |
| Participants | 15 users |
| Camera angles | 30°, 45°, 60°, 90° |
| Surfaces | White desk (primary) |
| Resolution | 640 × 480 pixels |
| Frame rate | 30 FPS |
| Depth sensor | Intel RealSense D405 (active stereo) |
| Depth range | 70–500 mm |
| Depth accuracy | < 0.5 mm at 350 mm |
| Per-frame annotations | 21 MediaPipe hand landmarks + 5 fingertip metric depths |
| Contact labels | Per-fingertip binary contact/hover state |

Splits

All frames from a given participant belong to exactly one split (participant-stratified), preventing identity leakage from hand shape, skin tone, or typing style.

| Split | Frames | Participants |
|---|---|---|
| Train | 42,640 (80%) | P01, P03, P04, P06, P07, P09–P13, P18 + supplementary |
| Validation | 5,330 (10%) | P05, P16 + supplementary |
| Test | 5,330 (10%) | P08, P17 + supplementary |
| Total | 53,300 | 15 participants (8:1:1 ratio) |
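Because the split is participant-stratified, a sanity check on any local copy reduces to verifying that no participant ID appears in more than one split. A minimal sketch (the helper `check_disjoint` is ours, not part of the dataset tooling; the participant lists would come from the `splits/*.txt` files):

```python
def check_disjoint(splits):
    """Raise ValueError if any participant ID appears in more than one
    split; return True otherwise (participant-stratified splitting)."""
    seen = {}  # participant ID -> split name where first observed
    for split_name, participants in splits.items():
        for pid in participants:
            if pid in seen:
                raise ValueError(
                    f"{pid} appears in both {seen[pid]} and {split_name}"
                )
            seen[pid] = split_name
    return True
```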

Data Structure

d405-hand-surface-depth/
├── train/
│   ├── P01/
│   │   ├── P01_angle90_white_desk_typing_074155/
│   │   │   ├── rgb/
│   │   │   │   ├── 000000.png        # BGR uint8, 640×480
│   │   │   │   ├── 000001.png
│   │   │   │   └── ...
│   │   │   ├── depth/
│   │   │   │   ├── 000000.png        # uint16, millimeters
│   │   │   │   ├── 000001.png
│   │   │   │   └── ...
│   │   │   ├── annotations/
│   │   │   │   ├── 000000.json       # Hand landmarks + fingertip depths
│   │   │   │   └── ...
│   │   │   └── metadata.json         # Session & camera info
│   │   ├── P01_angle45_white_desk_typing_094354/
│   │   │   └── ...
│   │   └── ...
│   ├── P03/
│   │   └── ...
│   └── ...
├── val/
│   ├── P13/
│   │   └── ...
│   └── P16/
│       └── ...
├── test/
│   ├── P05/
│   │   └── ...
│   └── P08/
│       └── ...
└── splits/
    ├── train.txt
    ├── val.txt
    └── test.txt
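Given the layout above, frames within a session can be enumerated by pairing same-named files across the `rgb/`, `depth/`, and `annotations/` subdirectories. A minimal sketch, assuming that layout (the helper `iter_frames` is ours):

```python
from pathlib import Path

def iter_frames(session_dir):
    """Yield (rgb_path, depth_path, annotation_path) triples for one
    recording session, pairing files by their zero-padded frame stem."""
    session = Path(session_dir)
    for rgb_path in sorted((session / "rgb").glob("*.png")):
        stem = rgb_path.stem  # e.g. "000000"
        depth_path = session / "depth" / f"{stem}.png"
        ann_path = session / "annotations" / f"{stem}.json"
        if depth_path.exists() and ann_path.exists():
            yield rgb_path, depth_path, ann_path
```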

File Formats

RGB Images (rgb/*.png)

8-bit color images (640 × 480) stored as PNG; OpenCV's cv2.imread() returns them as BGR uint8 arrays.

Depth Maps (depth/*.png)

16-bit unsigned integer PNGs where each pixel value represents depth in millimeters. A value of 0 indicates invalid/missing depth.

import cv2
import numpy as np

depth_mm = cv2.imread("depth/000000.png", cv2.IMREAD_UNCHANGED)  # uint16, millimeters
valid = depth_mm > 0                                             # 0 marks invalid/missing depth
depth_m = depth_mm.astype(np.float32) / 1000.0                   # convert to meters

Annotations (annotations/*.json)

Per-frame hand tracking results from MediaPipe:

{
  "frame_id": 17,
  "num_hands": 2,
  "hands": [
    {
      "handedness": "Right",
      "fingertip_depths_m": {
        "thumb": 0.352, "index": 0.348,
        "middle": 0.324, "ring": 0.350, "pinky": 0.357
      },
      "fingertip_pixels": {
        "thumb": [294, 326], "index": [303, 241],
        "middle": [338, 218], "ring": [368, 224], "pinky": [402, 248]
      },
      "landmarks_px": [[180, 426], [231, 418], "... (21 landmarks)"]
    }
  ]
}
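Reading the per-hand fingertip depths out of this schema is a one-liner per hand. A minimal sketch following the schema above (the helper name `load_fingertip_depths` is ours):

```python
import json

def load_fingertip_depths(path):
    """Parse one annotation file and return, per detected hand, its
    handedness and the five fingertip depths in meters."""
    with open(path) as f:
        ann = json.load(f)
    return [(h["handedness"], h["fingertip_depths_m"]) for h in ann["hands"]]
```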

Session Metadata (metadata.json)

{
  "camera": "Intel RealSense D405",
  "resolution": [640, 480],
  "fps": 30,
  "depth_scale_m": 0.0001,
  "depth_range_m": [0.07, 0.5],
  "intrinsics": {
    "fx": 434.16, "fy": 432.89,
    "cx": 323.11, "cy": 239.79,
    "width": 640, "height": 480
  },
  "session": {
    "participant": "P01",
    "angle_degrees": 90,
    "surface": "white_desk",
    "action": "typing"
  },
  "surface_calibration": {
    "plane_coefficients": [-1.24e-05, 1.63e-05, 1.0, -0.3647],
    "inlier_ratio": 0.77,
    "avg_surface_depth_m": 0.363
  }
}
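With the per-session intrinsics, a fingertip pixel and its metric depth can be back-projected into 3D camera coordinates via the standard pinhole model. A minimal sketch (the helper `backproject` is ours, not part of the dataset tooling):

```python
import numpy as np

def backproject(u, v, depth_m, intr):
    """Back-project pixel (u, v) with metric depth into camera-frame
    coordinates (meters) using the pinhole model from metadata.json."""
    x = (u - intr["cx"]) * depth_m / intr["fx"]
    y = (v - intr["cy"]) * depth_m / intr["fy"]
    return np.array([x, y, depth_m])
```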

Camera Setup

         [90° overhead]
              |
              |  35 cm
              |
    ──────────┼──────────  ← desk surface
   /          |          \
  / 60°     45°      30° \
 /            |            \
[cam]       [cam]        [cam]

The Intel RealSense D405 is mounted on a tripod at 25–45 cm from the desk surface. Each participant is recorded at multiple angles while typing on a printed QWERTY keyboard layout.

D405 Depth Processing Pipeline

| Filter | Parameter | Value |
|---|---|---|
| Spatial | magnitude | 2 |
| Spatial | alpha | 0.5 |
| Spatial | delta | 20 |
| Temporal | alpha | 0.4 |
| Temporal | delta | 20 |
| Hole filling | mode | 1 (farthest from around) |
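To illustrate what the temporal filter's alpha/delta parameters do, here is a simplified NumPy sketch of its exponential-smoothing behavior (an illustration only, not the librealsense implementation, which also supports persistence modes): pixels whose inter-frame change stays below `delta` depth units are smoothed, while larger jumps pass through unmodified.

```python
import numpy as np

def temporal_smooth(frames, alpha=0.4, delta=20):
    """Per-pixel temporal smoothing in the spirit of the librealsense
    temporal filter: blend current and previous values where the change
    is below `delta` (depth units); pass large jumps through untouched."""
    prev = frames[0].astype(np.float32)
    out = [prev.copy()]
    for frame in frames[1:]:
        cur = frame.astype(np.float32)
        small_change = np.abs(cur - prev) < delta
        smoothed = np.where(small_change, alpha * cur + (1 - alpha) * prev, cur)
        out.append(smoothed)
        prev = smoothed
    return out
```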

Contact Labels

Binary contact/hover labels are derived automatically from depth measurements using a velocity-gated hysteresis state machine:

| Transition | Threshold |
|---|---|
| Hover → Contact (entry) | fingertip-to-surface ≤ 4.5 mm |
| Contact → Hover (exit) | fingertip-to-surface ≥ 6.0 mm |
| Cooldown | 450 ms (~15 frames at 30 FPS) |

Surface depth at each pixel is computed from RANSAC plane coefficients fitted during calibration (empty desk, 30 frames).
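The entry/exit thresholds above can be sketched as a minimal hysteresis state machine over a per-frame fingertip-to-surface distance trace (the velocity gate and cooldown of the full detector are omitted for brevity; `contact_states` is a hypothetical helper, not part of the dataset tooling):

```python
def contact_states(distances_mm, enter_mm=4.5, exit_mm=6.0):
    """Label each frame 'contact' or 'hover' with hysteresis: enter
    Contact at <= enter_mm, leave it only at >= exit_mm, so distances
    in the 4.5-6.0 mm band keep the current state (no flicker)."""
    in_contact = False
    states = []
    for d in distances_mm:
        if not in_contact and d <= enter_mm:
            in_contact = True
        elif in_contact and d >= exit_mm:
            in_contact = False
        states.append("contact" if in_contact else "hover")
    return states
```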

Intended Use

Primary Use Cases

  • Monocular metric depth estimation — fine-tuning depth foundation models (e.g., Depth Anything V2, ZoeDepth, Metric3D) for close-range hand-surface scenarios
  • Fingertip contact detection — training and evaluating touch/hover classifiers for vision-based virtual keyboards and touchless interfaces
  • Hand-surface interaction research — studying hand pose, depth, and contact dynamics across multiple viewpoints

Out-of-Scope Use

  • General-purpose depth estimation (this dataset is specialized for 7–50 cm range)
  • Biometric identification from hand shape or skin tone
  • Any application that requires individual participant re-identification

Results Using This Dataset

Depth Estimation (fine-tuned Depth Anything V2 ViT-S)

| Metric | Pre-trained | Fine-tuned |
|---|---|---|
| MAE (mm) | 12.3 | 3.84 |
| δ1 (%) | 87.2 | 95.96 |
| RMSE (mm) | 18.4 | 4.8 |
| abs_rel | 0.042 | 0.008 |

Contact Detection

| Method | Accuracy (%) | F1 (%) | FPR (%) |
|---|---|---|---|
| Depth threshold only | 87.3 | 86.1 | 8.7 |
| Velocity only | 89.1 | 88.5 | 6.3 |
| Depth + velocity fusion (ours) | 94.2 | 94.4 | 4.2 |

Ethical Considerations

  • All participants provided informed consent for data collection and public release
  • No personally identifiable information (face, name) is included — only hand images
  • The dataset should not be used for biometric identification or surveillance purposes
  • Participant IDs (P01–P18) are anonymized

Citation

@inproceedings{toshpulatov2026realtime,
  title={Real-Time Multimodal Fingertip Contact Detection via Depth and Motion
         Fusion for Vision-Based Human-Computer Interaction},
  author={Toshpulatov, Mukhiddin and Lee, Wookey and Lee, Suan and Lee, Geehyuk},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision
             and Pattern Recognition (CVPR)},
  year={2026}
}

License

This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
