---
license: mit
tags:
  - Embodied-AI
  - Robotic manipulation
  - Vision-Language-Action model
  - Teleoperation
  - Dexterous Hand
task_categories:
- robotics
language:
- en
size_categories:
- 1K<n<10K
preview: false
---

# VITRA Teleoperation Dataset

## Dataset Summary

This dataset contains real-world robot teleoperation demonstrations collected
using a 7-DoF robotic arm equipped with a dexterous hand and a head-mounted RGB
camera. Each episode provides synchronized **numerical state/action data**
and **video recordings**. The dataset was used for fine-tuning in the project [VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos](https://arxiv.org/abs/2510.21571).

**Project page:** [https://microsoft.github.io/VITRA/](https://microsoft.github.io/VITRA/)

---

## Hardware Setup

- **Robot Arm**: Realman Arm (7-DoF)  
  URDF: https://github.com/RealManRobot/rm_models/tree/main/RM75/urdf/RM75-6F
- **Dexterous Hand**: XHand (12-DoF)
- **Head Camera**: Intel RealSense D455

---

## Data Modalities and Files

Each episode consists of two synchronized files:

- `<episode_id>.h5` — numerical data including robot states, actions, kinematics,
  and metadata
- `<episode_id>.mp4` — RGB video stream recorded from the head-mounted camera

The two files correspond **one-to-one** and share the same episode identifier.

---

## Coordinate Frames

The dataset uses the following coordinate frames:

- **arm_base**  
  Root frame of the arm kinematic chain, defined in the URDF.
- **ee_urdf**  
  End-effector frame defined in the URDF (joint7).
- **hand_mount**  
  Rigid mounting frame of the dexterous hand, including flange offset.  
  This frame is rotationally aligned with the human-hand axes illustrated in Figure 1 (identity relative rotation).
- **head_camera**  
  Optical center of the head-mounted RGB camera.

<p align="center">
  <img src="figure/hand_mount_frame.png" width="700"><br>
  <em> <b> Figure 1.</b> The <code>hand_mount</code> frame axes. Axis directions follow the human hand definition illustrated in the figure.</em>
</p>

---

## Arm Availability and Masks

The dataset format is compatible with both **right-arm-only** episodes and **dual-arm** episodes. The currently released dataset contains only right-arm data.

- Missing arms/hands are filled with zeros to keep array shapes consistent.
- Availability is indicated by:
  - `/meta/has_left`, `/meta/has_right` (episode-level)
  - `/mask/*` (frame-level)

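The masks above can be applied directly as boolean indices. A minimal numpy sketch with hypothetical values (in the released right-arm-only data, the left-side masks would be all `False`):

```python
import numpy as np

# Toy frame-level mask and state array for a 5-frame episode (hypothetical values).
right_arm_mask = np.array([True, True, False, True, True])   # /mask/right_arm, shape (T,)
right_arm_joint = np.zeros((5, 7))                           # /state/right_arm_joint, (T, Na)

# Keep only frames where the right arm is actually valid.
valid_right = right_arm_joint[right_arm_mask]
print(valid_right.shape)  # (4, 7)
```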
---

## HDF5 File Structure

Each `.h5` file follows the structure below:
```
/
├── meta/
│   ├── instruction                     string
│   ├── video_path                      string
│   ├── frame_count                     int # T
│   ├── fps                             float
│   ├── has_left                        bool
│   └── has_right                       bool

├── kinematics/
│   ├── left_ee_urdf_to_hand_mount      (4, 4) float64
│   ├── right_ee_urdf_to_hand_mount     (4, 4) float64
│   ├── head_camera_to_left_arm_base    (4, 4) float64
│   └── head_camera_to_right_arm_base   (4, 4) float64

├── observation/
│   └── camera/
│       └── intrinsics                   (3, 3) float64

├── state/
│   ├── left_arm_joint                   (T, Na) float64  # joint positions (rad); Na = 7 (arm DoF)
│   ├── right_arm_joint                  (T, Na) float64
│   ├── left_hand_mount_pose             (T, 6)  float64  # hand_mount pose in arm_base: [x,y,z,rx,ry,rz]
│   ├── right_hand_mount_pose            (T, 6)  float64  # hand_mount pose in arm_base: [x,y,z,rx,ry,rz]
│   ├── left_hand_mount_pose_in_cam      (T, 6)  float64  # hand_mount pose in head_camera: [x,y,z,rx,ry,rz]
│   ├── right_hand_mount_pose_in_cam     (T, 6)  float64  # hand_mount pose in head_camera: [x,y,z,rx,ry,rz]
│   ├── left_hand_joint                  (T, Nh) float64  # joint positions (rad); Nh = 12 (hand DoF)
│   └── right_hand_joint                 (T, Nh) float64

├── action/
│   ├── left_arm_joint                   (T, Na) float64  # target joint positions (rad)
│   ├── right_arm_joint                  (T, Na) float64  # target joint positions (rad)
│   ├── left_hand_joint                  (T, Nh) float64  # target joint positions (rad)
│   └── right_hand_joint                 (T, Nh) float64  # target joint positions (rad)

└── mask/
    ├── left_arm                         (T,) bool
    ├── right_arm                        (T,) bool
    ├── left_hand                        (T,) bool
    └── right_hand                       (T,) bool
```
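Reading this layout is straightforward with `h5py`. The sketch below loads a subset of the fields listed above; the group and dataset names come from the structure, but the helper function itself (`load_episode`) is only an illustration, not part of the dataset's tooling:

```python
import h5py


def load_episode(h5_path):
    """Load metadata, states, actions, and masks from one episode file (illustrative sketch)."""
    with h5py.File(h5_path, "r") as ep:
        meta = {
            "instruction": ep["meta/instruction"][()],
            "frame_count": int(ep["meta/frame_count"][()]),
            "has_right": bool(ep["meta/has_right"][()]),
        }
        state = {k: ep["state"][k][:] for k in ep["state"]}
        action = {k: ep["action"][k][:] for k in ep["action"]}
        mask = {k: ep["mask"][k][:].astype(bool) for k in ep["mask"]}
    return meta, state, action, mask
```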

---

## Pose Representation

For all `*_hand_mount_pose` entries, poses are represented as:

```
[x, y, z, rx, ry, rz]
```

where:
- `(x, y, z)` denotes the position of the `hand_mount` frame expressed in
  `arm_base` (meters)
- `(rx, ry, rz)` denotes the rotation vector in axis–angle representation
  (radians)

---

## Transformation Notation

A homogeneous transformation matrix is denoted by `T` (4×4).

- **Subscript**: reference frame (the coordinate system used for expression)
- **Superscript**: target frame (the frame being described)

All subscripts and superscripts are written on the **right-hand side** of `T`.

Example: `T^{hand\_mount}_{arm\_base}` represents the pose of `hand_mount`
expressed in the `arm_base` frame.

---

## Kinematic Relations and Episode-Specific Transforms

Different flange hardware or camera mounting configurations may be used across
episodes or arms. As a result:

> **All kinematic and extrinsic transforms must be read from the current
> episode and must not be assumed constant.**

The pose of the `hand_mount` frame expressed in `arm_base` is computed as:

$$
T^{hand\_mount}_{arm\_base}
=
T^{ee\_urdf}_{arm\_base}
\cdot
T^{hand\_mount}_{ee\_urdf}
$$

where:

- `T^{ee\_urdf}_{arm\_base}` is obtained via forward kinematics (FK) from the arm
  joint positions, corresponding to the URDF end-effector frame (joint7).
- `T^{hand\_mount}_{ee\_urdf}` is a fixed, episode-specific transform provided under
  `/kinematics/*_ee_urdf_to_hand_mount`.
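In code, this composition is a single matrix product. The transforms below are hypothetical placeholders: in practice the first comes from running FK on `state/*_arm_joint` and the second is read from `/kinematics/*_ee_urdf_to_hand_mount` of the same episode:

```python
import numpy as np

# Hypothetical FK result: pose of ee_urdf expressed in arm_base.
T_arm_base__ee_urdf = np.eye(4)
T_arm_base__ee_urdf[:3, 3] = [0.4, 0.0, 0.3]

# Hypothetical fixed flange offset: pose of hand_mount expressed in ee_urdf.
T_ee_urdf__hand_mount = np.eye(4)
T_ee_urdf__hand_mount[:3, 3] = [0.0, 0.0, 0.12]

# T^{hand_mount}_{arm_base} = T^{ee_urdf}_{arm_base} . T^{hand_mount}_{ee_urdf}
T_arm_base__hand_mount = T_arm_base__ee_urdf @ T_ee_urdf__hand_mount
```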

**Camera extrinsics may also vary across episodes.**  
Transforms under `/kinematics/head_camera_to_*_arm_base` should likewise be
read from the current episode and must not be assumed constant.
The pose of the `hand_mount` frame expressed in `head_camera` (i.e. `*_hand_mount_pose_in_cam`) is:

$$
T^{hand\_mount}_{head\_camera} 
=
(T^{head\_camera}_{arm\_base})^{-1}
\cdot
T^{hand\_mount}_{arm\_base} 
$$

where:

- `T^{head\_camera}_{arm\_base}` is an episode-specific transform provided under `/kinematics/head_camera_to_*_arm_base`.
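This inversion-and-compose step can likewise be written directly in numpy. The transforms below are hypothetical placeholders (identity rotations) standing in for the episode's camera extrinsics and the `arm_base`-frame hand pose:

```python
import numpy as np

# Hypothetical camera extrinsics: pose of head_camera expressed in arm_base,
# read in practice from /kinematics/head_camera_to_*_arm_base of the episode.
T_arm_base__head_camera = np.eye(4)
T_arm_base__head_camera[:3, 3] = [0.1, 0.0, 0.5]

# Hypothetical pose of hand_mount expressed in arm_base (from the FK chain above).
T_arm_base__hand_mount = np.eye(4)
T_arm_base__hand_mount[:3, 3] = [0.4, 0.0, 0.3]

# T^{hand_mount}_{head_camera} = (T^{head_camera}_{arm_base})^{-1} . T^{hand_mount}_{arm_base}
T_head_camera__hand_mount = np.linalg.inv(T_arm_base__head_camera) @ T_arm_base__hand_mount
```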

---