---
license: mit
viewer: false
---

# TacQuad: Aligned Multi-Modal Multi-Sensor Tactile Dataset

TacQuad is an aligned multi-modal, multi-sensor tactile dataset collected from four types of visuo-tactile sensors (GelSight Mini, DIGIT, DuraGel, and Tac3D). It addresses the low standardization of visuo-tactile sensors by providing multi-sensor aligned data paired with text and visual images. This explicitly enables models to learn semantic-level tactile attributes and sensor-agnostic features, forming a unified multi-sensor representation space through data-driven approaches. The dataset includes two subsets of paired data with different levels of alignment: