Raffael-Kultyshev committed on
Commit aa91595 · verified · 1 Parent(s): 67a45bd

Update README with hybrid occlusion-aware pose estimation details

Files changed (1):
  1. README.md (+7 -1)
README.md CHANGED
@@ -23,7 +23,13 @@ pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

  RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.

- **Data Integrity:** When hand tracking fails (out-of-frame, occlusion), states hold their last known value (frozen). Downstream models can detect this by checking for identical consecutive frames (`frame[i] == frame[i-1]`). Euler angles are unwrapped for continuous visualization without ±180° discontinuities.
+ **Data Integrity & Occlusion Handling:**
+ - **Visible joints:** Direct depth measurement from TrueDepth sensor (<3mm accuracy)
+ - **Self-occluded joints:** Estimated via scaled MediaPipe world landmarks anchored to the visible wrist position (~5-8mm accuracy)
+ - **Euler angles:** Unwrapped for continuous visualization without ±180° discontinuities
+ - **Confidence scores:** Each joint has a confidence value (1.0 = depth-based, 0.7 = estimated)
+
+ This hybrid approach ensures no "fake" depth values are used when joints are behind the palm during grasping movements.

  ---
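A downstream consumer of this dataset might combine the conventions described in the diff: detecting frozen (held-last-value) frames via identical consecutive frames, separating depth-based from estimated joints via the confidence score, and unwrapping Euler angles. The sketch below assumes a hypothetical frame layout of `[x, y, z, confidence]` per joint, which is not specified in the README; only the confidence convention (1.0 = depth-based, 0.7 = estimated) and the `frame[i] == frame[i-1]` check come from the source.

```python
import numpy as np

# Assumed layout for illustration only: each frame is a (J, 4) array of
# [x, y, z, confidence] per joint. Confidence follows the README convention:
# 1.0 = direct TrueDepth measurement, 0.7 = estimated (occluded) joint.
DEPTH_BASED = 1.0


def find_frozen_frames(frames):
    """Indices i where frame[i] == frame[i-1], i.e. tracking held its last value."""
    frames = np.asarray(frames)
    same = np.all(frames[1:] == frames[:-1], axis=(1, 2))
    return np.flatnonzero(same) + 1


def depth_based_mask(frame):
    """Boolean mask of joints whose depth came directly from the sensor."""
    return frame[:, 3] >= DEPTH_BASED


def unwrap_euler_deg(angles_deg):
    """Remove ±180° discontinuities from an Euler-angle sequence (degrees)."""
    return np.degrees(np.unwrap(np.radians(angles_deg)))


# Tiny demo: two joints over three frames; frame 1 repeats frame 0 (frozen).
f0 = np.array([[0.0, 0.0, 0.30, 1.0],   # visible joint, depth-based
               [0.1, 0.0, 0.31, 0.7]])  # occluded joint, estimated
f2 = f0.copy()
f2[:, :3] += 0.01                        # positions move, confidences unchanged
frames = np.stack([f0, f0, f2])

print(find_frozen_frames(frames))                  # → [1]
print(depth_based_mask(f2))                        # → [ True False]
print(unwrap_euler_deg([170.0, -175.0, -160.0]))   # → [170. 185. 200.]
```

Note that the frozen-frame check only applies to the pre-update "held last value" behavior; with the hybrid estimation described in the new README text, occluded joints carry estimated values and a 0.7 confidence instead of freezing.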