---
license: mit
task_categories:
- robotics
tags:
- tactile
---
# πŸ“¦ FreeTacMan
## Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation
## 🎯 Overview
This dataset supports the paper **[FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation](http://arxiv.org/abs/2506.01941)**. It is a large-scale, high-precision visuo-tactile manipulation dataset containing over 3 million visuo-tactile image pairs and more than 10k trajectories across 50 tasks.
![FreeTacMan System Overview](https://raw.githubusercontent.com/OpenDriveLab/opendrivelab.github.io/master/FreeTacMan/task/datasetweb.png)
Please refer to our πŸš€ [Website](http://opendrivelab.com/freetacman) | πŸ“„ [Paper](http://arxiv.org/abs/2506.01941) | πŸ’» [Code](https://github.com/OpenDriveLab/FreeTacMan) | πŸ› οΈ [Hardware Guide](https://docs.google.com/document/d/1Hhi2stn_goXUHdYi7461w10AJbzQDC0fdYaSxMdMVXM/edit?addon_store&tab=t.0#heading=h.rl14j3i7oz0t) | πŸ“Ί [Video](https://opendrivelab.github.io/FreeTacMan/landing/FreeTacMan_demo_video.mp4) | 🌐 [X](https://x.com/OpenDriveLab/status/1930234855729836112) for more details.
## πŸ”¬ Potential Applications
The FreeTacMan dataset enables diverse research directions in visuo-tactile learning and manipulation:
- **System Reproduction**: For researchers interested in hardware implementation, you can reproduce FreeTacMan from scratch using our πŸ› οΈ [Hardware Guide](https://docs.google.com/document/d/1Hhi2stn_goXUHdYi7461w10AJbzQDC0fdYaSxMdMVXM/edit?addon_store&tab=t.0#heading=h.rl14j3i7oz0t) and πŸ’» [Code](https://github.com/OpenDriveLab/FreeTacMan).
- **Multimodal Imitation Learning**: Transfer to other LED-based tactile sensors (such as GelSight) for developing robust multimodal imitation learning frameworks.
- **Tactile-aware Grasping**: Utilize the dataset for pre-training tactile representation models and developing tactile-aware reasoning systems.
- **Simulation-to-Real Transfer**: Leverage the dynamic tactile interaction sequences to improve tactile simulation fidelity and reduce the sim-to-real gap.
## πŸ“‚ Dataset Structure
The dataset is organized into 50 task categories, each containing:
- **Video files**: Synchronized video recordings from the wrist-mounted and visuo-tactile cameras for each demonstration
- **Trajectory files**: Detailed tracking data for tool center point pose and gripper distance
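A hypothetical layout is sketched below; the folder and file names here are illustrative only, not taken from the release, so adjust them to what you find after downloading:
```
FreeTacMan/
β”œβ”€β”€ task_01/
β”‚   β”œβ”€β”€ episode_0001/
β”‚   β”‚   β”œβ”€β”€ wrist_camera.mp4       # wrist-mounted camera view
β”‚   β”‚   β”œβ”€β”€ tactile_camera.mp4     # visuo-tactile camera view
β”‚   β”‚   └── trajectory.csv         # TCP pose + gripper distance per timestamp
β”‚   └── ...
└── ...
```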
## 🧾 Data Format
### Video Files
- **Format**: MP4
- **Views**: Wrist-mounted camera and visuo-tactile camera perspectives per demonstration
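As a quick sanity check, the MP4 clips can be read with standard tooling. A minimal sketch with OpenCV follows; the file name `wrist_camera.mp4` is a hypothetical placeholder:
```python
import cv2

# Open one demonstration video (file name is hypothetical).
cap = cv2.VideoCapture("wrist_camera.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"{n_frames} frames at {fps:.1f} fps")

while True:
    ok, frame = cap.read()  # frame is a BGR numpy array
    if not ok:
        break
    # ... process the frame (e.g., pair it with the tactile stream) ...

cap.release()
```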
### Trajectory Files
Each trajectory file contains the following data columns:
#### Timestamp
- `timestamp` - Unix timestamp
#### Tool Center Point (TCP) Data
- `TCP_pos_x`, `TCP_pos_y`, `TCP_pos_z` - TCP position
- `TCP_euler_x`, `TCP_euler_y`, `TCP_euler_z` - TCP orientation (Euler angles)
- `quat_w`, `quat_x`, `quat_y`, `quat_z` - TCP orientation (quaternion representation)
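Since orientation is stored both as Euler angles and as a quaternion, the two representations can be cross-checked. A sketch with SciPy, assuming an `xyz` Euler convention in radians (the convention is not stated here, so verify it against the data):
```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# One example orientation (values are made up for illustration).
quat_w, quat_x, quat_y, quat_z = 0.995, 0.05, 0.03, 0.08

# SciPy expects scalar-last (x, y, z, w) quaternion order.
rot = R.from_quat([quat_x, quat_y, quat_z, quat_w])

# Recover Euler angles; the 'xyz' convention here is an assumption.
euler_xyz = rot.as_euler("xyz")
print(np.degrees(euler_xyz))
```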
#### Gripper Data
- `gripper_distance` - Gripper opening distance
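A minimal loading sketch with pandas, assuming the trajectory files are CSVs with the columns listed above (the file name and CSV format are assumptions; adjust to the actual release):
```python
import pandas as pd

# Load one trajectory (file name and CSV format are assumptions).
traj = pd.read_csv("trajectory.csv")

# Tool center point position over time, as an (N, 3) array.
positions = traj[["TCP_pos_x", "TCP_pos_y", "TCP_pos_z"]].to_numpy()

# Gripper opening distance, indexed by Unix timestamp.
gripper = traj.set_index("timestamp")["gripper_distance"]

print(f"{len(traj)} samples spanning "
      f"{traj['timestamp'].iloc[-1] - traj['timestamp'].iloc[0]:.1f} s")
```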
## πŸ“ Citation
If you use this dataset in your research, please cite:
```bibtex
@article{wu2025freetacman,
  title={FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation},
  author={Wu, Longyan and Yu, Checheng and Ren, Jieji and Chen, Li and Jiang, Yufei and Huang, Ran and Gu, Guoying and Li, Hongyang},
  journal={arXiv preprint arXiv:2506.01941},
  year={2025}
}
```
## πŸ’Ό License
This dataset is released under the MIT License. See the LICENSE file for details.
## πŸ“§ Contact
For questions or issues regarding the dataset, please contact: Longyan Wu (im.longyanwu@gmail.com).