README
📋 At a Glance
Teleoperated demonstrations with a da Vinci robot performing the "needle threading" surgical practice task.
📖 Dataset Overview
This dataset contains trajectories of novices using the da Vinci Research Kit (dVRK) to perform the "needle threading" surgical practice task. It includes successful trials, failures, and recovery attempts. One episode is defined as threading a string through the loop of an eye bolt.
| Description | |
|---|---|
| Total Trajectories | 204 |
| Total Hours | 00:56:47 |
| Data Type | [ ] Clinical [ ] Ex-Vivo [X] Table-Top Phantom [ ] Digital Simulation [ ] Physical Simulation [ ] Other |
| License | CC BY 4.0 |
| Version | [1.0] |
🎯 Tasks & Domain
Domain
- [X] Surgical Robotics
- [ ] Ultrasound Robotics
- [ ] Other Healthcare Robotics
Demonstrated Skills
- String threading
🔬 Data Collection Details
Collection Method
- [X] Human Teleoperation
- [ ] Programmatic/State-Machine
- [ ] AI Policy / Autonomous
- [ ] Other (Please specify)
Operator Details
| Description | |
|---|---|
| Operator Count | 2 |
| Operator Skill Level | [ ] Expert (e.g., Surgeon, Sonographer) [X] Intermediate (e.g., Trained Researcher) [X] Novice (e.g., ML Researcher with minimal experience) [ ] N/A |
| Collection Period | From [2026-02-10] to [2026-02-11] |
Recovery Demonstrations
- [X] Yes
- [ ] No
If yes, please briefly describe the recovery process:
- Re-aiming after the string misses the eye of the bolt on the first attempt
- Re-grasping after the string is not grasped on the first attempt during handover
Failure Demonstrations
- [X] Yes
- [ ] No
If yes, please briefly describe the failures:
- Tool collision
- Pushing the board by hitting the bolts with a tool
- Repeatedly missing the eye of the bolt
💡 Diversity Dimensions
- [X] Camera Position / Angle
- [X] Lighting Conditions
- [ ] Target Object (e.g., different phantom models, suture types)
- [X] Spatial Layout (varying starting positions and board placement)
- [ ] Robot Embodiment (if multiple robots were used)
- [ ] Task Execution (e.g., different techniques for the same task)
- [X] Background / Scene (different colors in the background)
- [X] Other (Please specify: [Setup joints])
If you checked any of the above, please briefly elaborate below.
- Endoscope lighting was changed throughout the trials
- Camera position was varied
- Setup joint configuration was varied
- The starting positions of the board and the string were varied
- The background was varied
🛠️ Equipment & Setup
Robotic Platform(s)
- Robot 1: da Vinci Classic (with da Vinci Research Kit)
Sensors & Cameras
| Type | Model/Details |
|---|---|
| Primary Camera | Stereo Endoscopic Camera, 720x576 @ 30fps |
| Wrist Camera | Endoscopic Camera (x2), 640x480 @ 30fps |
| RealSense Camera | RealSense RGB-D camera, 1280x720 @ 30fps |
🎯 Action & State Space Representation
Action Space Representation
Primary Action Representation:
- [X] Absolute Cartesian (position/orientation relative to robot base)
- [ ] Relative Cartesian (delta position/orientation from current pose)
- [ ] Joint Space (direct joint angle commands)
- [ ] Other (Please specify)
Orientation Representation:
- [X] Quaternions (x, y, z, w)
- [ ] Euler Angles (roll, pitch, yaw)
- [ ] Axis-Angle (rotation vector)
- [ ] Rotation Matrix (3x3 matrix)
- [ ] Other (Please specify)
Reference Frame:
- [X] Robot Base Frame
- [ ] Tool/End-Effector Frame
- [ ] World/Global Frame
- [ ] Camera Frame
- [ ] Other (Please specify)
Action Dimensions:
action: [x, y, z, qx, qy, qz, qw, gripper]
- x, y, z: Absolute position in robot base frame (meters)
- qx, qy, qz, qw: Absolute orientation as quaternion
- gripper: Gripper opening angle (radians)
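As a sketch of how a consumer of this dataset might unpack the 8-D action vector described above (the helper name and example values are illustrative, not part of the dataset):

```python
import numpy as np

def split_action(a):
    """Split an 8-D action vector [x, y, z, qx, qy, qz, qw, gripper]
    into position (meters, robot base frame), unit quaternion (x, y, z, w),
    and gripper opening angle (radians), per the layout documented above."""
    a = np.asarray(a, dtype=np.float64)
    assert a.shape == (8,), "expected an 8-D action vector"
    pos, quat, gripper = a[:3], a[3:7], a[7]
    # Renormalize the quaternion to guard against small norm drift
    # introduced by logging or interpolation.
    quat = quat / np.linalg.norm(quat)
    return pos, quat, gripper

pos, quat, grip = split_action([0.1, -0.02, 0.05, 0.0, 0.0, 0.0, 1.0, 0.6])
```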
State Space Representation
State Information Included:
- [ ] Joint Positions (all articulated joints)
- [ ] Joint Velocities
- [X] End-Effector Pose (Cartesian position/orientation)
- [ ] Force/Torque Readings
- [X] Gripper State (position, force, etc.)
- [X] Other (Please specify: [Setup Joints Configuration], [Camera Pose])
State Dimensions:
observation.state: [x, y, z, qx, qy, qz, qw, gripper]
- x, y, z: Absolute position in robot base frame (meters)
- qx, qy, qz, qw: Absolute orientation as quaternion
- gripper: Gripper opening angle (radians)
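Because both actions and states store absolute poses, delta (relative) motions can be derived offline if a model needs them. A minimal pure-NumPy sketch, using the x-y-z-w quaternion convention documented above (function names are illustrative):

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion in (x, y, z, w) order.
    x, y, z, w = q
    return np.array([-x, -y, -z, w])

def quat_mul(q1, q2):
    # Hamilton product of two quaternions in (x, y, z, w) order.
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def delta_pose(state_t, state_t1):
    """Relative motion between two consecutive
    [x, y, z, qx, qy, qz, qw, gripper] states."""
    p0, q0 = np.asarray(state_t[:3]), np.asarray(state_t[3:7])
    p1, q1 = np.asarray(state_t1[:3]), np.asarray(state_t1[3:7])
    dp = p1 - p0                       # translation delta in base frame
    dq = quat_mul(q1, quat_conj(q0))   # rotation taking q0 to q1
    return dp, dq
```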
📋 Additional Representations
⏱️ Data Synchronization Approach
Each modality (DeckLink cameras, USB cameras, and robot kinematics) was recorded with timestamps on the same PC. A post-processing synchronization script segmented the recordings into trials using explicit start/end markers and used the kinematic time series as the reference timeline. For each kinematic timestamp, the temporally nearest image frame from each camera stream was selected. Note that the system's original stereo endoscope delivers only ~20 FPS, so a sub-50 ms inter-modality delay cannot always be guaranteed.
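The nearest-frame selection described above can be sketched with `np.searchsorted` over sorted timestamp arrays (variable and function names are illustrative, not the dataset's actual script):

```python
import numpy as np

def nearest_frames(kin_ts, cam_ts):
    """For each kinematic timestamp, return the index of the temporally
    nearest camera frame. Both arrays must be sorted, in seconds."""
    idx = np.searchsorted(cam_ts, kin_ts)   # insertion points into cam_ts
    idx = np.clip(idx, 1, len(cam_ts) - 1)  # keep a left and right neighbor
    left, right = cam_ts[idx - 1], cam_ts[idx]
    # Step back one index wherever the left neighbor is closer in time.
    idx -= (kin_ts - left) < (right - kin_ts)
    return idx

cam = np.array([0.0, 0.033, 0.066])   # ~30 FPS camera timestamps
kin = np.array([0.010, 0.050])        # kinematic reference timestamps
matched = nearest_frames(kin, cam)
```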
👥 Attribution & Contact
| Description | |
|---|---|
| Dataset Lead | [Kristóf Takács, Eszter Lukács, Tamás Haidegger] |
| Institution | [Óbuda University] |
| Contact Email | [kristof.takacs@irob.uni-obuda.hu, eszter.lukacs@irob.uni-obuda.hu, haidegger@irob.uni-obuda.hu] |
| Citation (BibTeX) | |