Update README with hybrid occlusion-aware pose estimation details
README.md
CHANGED
@@ -23,7 +23,13 @@ pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
 
 RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.
 
-**Data Integrity
+**Data Integrity & Occlusion Handling:**
+- **Visible joints:** Direct depth measurement from TrueDepth sensor (<3mm accuracy)
+- **Self-occluded joints:** Estimated via scaled MediaPipe world landmarks anchored to visible wrist position (~5-8mm accuracy)
+- **Euler angles:** Unwrapped for continuous visualization without ±180° discontinuities
+- **Confidence scores:** Each joint has a confidence value (1.0 = depth-based, 0.7 = estimated)
+
+This hybrid approach ensures no "fake" depth values are used when joints are behind the palm during grasping movements.
 
 ---
 
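For readers unfamiliar with the wrist-anchoring idea in the new section, here is a minimal sketch of how self-occluded joints could be filled in from MediaPipe world landmarks and tagged with the stated confidence values. It assumes 21 joints in MediaPipe order with the wrist at index 0, depth-derived positions in millimetres, and world landmarks in metres; the function and argument names (`fuse_hand_joints`, `visible_mask`) are hypothetical and not part of the dataset's tooling.

```python
import numpy as np

# Hypothetical sketch of the hybrid occlusion handling described above.
# Assumptions (not from the dataset tooling): 21 hand joints in MediaPipe
# order with the wrist at index 0; depth_joints_mm are TrueDepth-derived
# 3-D joint positions in millimetres (NaN where occluded); world_landmarks_m
# are MediaPipe world landmarks in metres; the wrist itself is visible.

def fuse_hand_joints(depth_joints_mm: np.ndarray,
                     world_landmarks_m: np.ndarray,
                     visible_mask: np.ndarray):
    """Return fused (21, 3) joint positions in mm and per-joint confidences."""
    fused = depth_joints_mm.copy()
    conf = np.where(visible_mask, 1.0, 0.7)  # 1.0 = depth-based, 0.7 = estimated

    # Scale the MediaPipe landmarks to millimetres and anchor them to the
    # depth-measured wrist so estimated joints share the metric frame of
    # the visible ones.
    landmarks_mm = world_landmarks_m * 1000.0
    anchored = landmarks_mm - landmarks_mm[0] + depth_joints_mm[0]

    # Occluded joints take the anchored estimate; visible joints keep
    # their direct depth measurement.
    fused[~visible_mask] = anchored[~visible_mask]
    return fused, conf
```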
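The "unwrapped Euler angles" bullet refers to removing the ±180° jumps that occur when an angle crosses the wrap-around point. A minimal sketch of the same operation with NumPy follows; the (frames, 3) layout and degree units are assumptions about the export format.

```python
import numpy as np

# Minimal sketch of Euler-angle unwrapping, assuming angles stored in
# degrees as a (num_frames, 3) roll/pitch/yaw array.
def unwrap_euler_deg(euler_deg: np.ndarray) -> np.ndarray:
    # np.unwrap removes jumps larger than the 360-degree period, so a
    # trajectory crossing +/-180 degrees stays continuous for plotting.
    return np.unwrap(euler_deg, period=360.0, axis=0)

# Example: a yaw sweep passing from +179 deg to -179 deg continues to
# +181 deg after unwrapping instead of jumping by ~358 deg.
```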