Modalities: Video
LongyanWu committed
Commit f99ab3e · 1 parent: 4c1d9d3
Files changed (1):
  1. README.md +11 -0
README.md CHANGED
@@ -16,6 +16,17 @@ This dataset supports the paper **[FreeTacman: Robot-free Visuo-Tactile Data Col
 
  ![FreeTacMan System Overview](https://raw.githubusercontent.com/OpenDriveLab/opendrivelab.github.io/master/FreeTacMan/task/datasetweb.png)

  Please refer to our 🚀 [Website](http://opendrivelab.com/freetacman) | 📄 [Paper](http://arxiv.org/abs/2506.01941) | 💻 [Code](https://github.com/OpenDriveLab/FreeTacMan) | 🛠️ [Hardware Guide](https://docs.google.com/document/d/1Hhi2stn_goXUHdYi7461w10AJbzQDC0fdYaSxMdMVXM/edit?addon_store&tab=t.0#heading=h.rl14j3i7oz0t) | 📺 [Video](https://opendrivelab.github.io/FreeTacMan/landing/FreeTacMan_demo_video.mp4) | 🌐 [X](https://x.com/OpenDriveLab/status/1930234855729836112) for more details.

+ ## 🔬 Potential Applications
+ The FreeTacMan dataset enables diverse research directions in visuo-tactile learning and manipulation:
+
+ **System Reproduction**: Researchers interested in the hardware can reproduce FreeTacMan from scratch using our 🛠️ [Hardware Guide](https://docs.google.com/document/d/1Hhi2stn_goXUHdYi7461w10AJbzQDC0fdYaSxMdMVXM/edit?addon_store&tab=t.0#heading=h.rl14j3i7oz0t) and 💻 [Code](https://github.com/OpenDriveLab/FreeTacMan).
+
+ **Multimodal Imitation Learning**: Transfer the data to other LED-based optical tactile sensors (such as GelSight) to build robust multimodal imitation learning frameworks.
+
+ **Tactile-aware Grasping**: Use the dataset to pre-train tactile representation models and to develop tactile-aware reasoning systems.
+
+ **Simulation-to-Real Transfer**: Leverage the dynamic tactile interaction sequences to improve tactile simulation fidelity and narrow the sim-to-real gap.
+
  ## 📂 Dataset Structure

  The dataset is organized into 50 task categories, each containing: