size_categories:
- 1K<n<10K
pretty_name: Haptic data
---

The full code is available at: https://github.com/LeMei/HapticCap

# 📌 HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals

arXiv: https://arxiv.org/pdf/2507.13318 (Findings of EMNLP 2025)

---

## 📖 Introduction

**HapticCap** is a multimodal dataset and benchmark task designed for understanding **user experience of vibration-based haptic signals**.
It provides a new resource for research at the intersection of **haptics, text, and multimodal learning**.

---

## 📂 Dataset

- **Modality**: vibration haptic signals paired with textual annotations
- **Textual Annotations**:
  - Sensory: physical attributes of the signal (e.g., the intensity of tapping).
  - Emotional: affective impressions the signal evokes (e.g., the mood of a scene).
  - Associative: familiar real-world experiences the signal recalls (e.g., the buzzing of a bee, a heartbeat).
- **Format**: signals stored as time-series data; annotations in JSON keyed by haptic signal ID (see the loading sketch below)
- **Scale**:

<p align="center">
<img width="304" height="334" alt="image" src="https://github.com/user-attachments/assets/9debe339-c83e-4e3d-b173-99d004b0d1b6" />
</p>
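
The sketch below shows one way to pair a signal with its annotations. It assumes one time-series file per signal and a JSON list of annotation records; the file names and field names (`signal_id`, `sensory`, `emotional`, `associative`) are illustrative guesses, not the dataset's confirmed schema, so adjust them to the actual files from the download links below.

```python
# Hypothetical layout: one CSV of time-series samples per signal, plus a
# JSON list of annotation records keyed by haptic signal ID.
import json

import numpy as np


def load_annotations(path):
    """Index annotation records by their haptic signal ID (assumed field name)."""
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)
    return {rec["signal_id"]: rec for rec in records}


annotations = load_annotations("annotations.json")
signal = np.loadtxt("signals/0001.csv", delimiter=",")  # raw vibration samples
record = annotations["0001"]
print(record["sensory"], record["emotional"], record["associative"])
```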

---

Google Drive:

Haptic Vibration Signals: <https://drive.google.com/drive/folders/1xylMC-EFswTc3adcc6rAzyFsXLSmVweg?usp=drive_link>

Human Descriptions: <https://drive.google.com/drive/folders/1ovlIbfJecXAq0TbItmrRl5dVV7OCCQzB?usp=drive_link>

Alternatively, find the data at: <https://huggingface.co/datasets/GuiminHu/HapticCap>
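
The Hugging Face copy can be fetched with `huggingface_hub`, as in the sketch below; the exact repository contents may differ from the Google Drive folders.

```python
# Download the full dataset repository from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="GuiminHu/HapticCap", repo_type="dataset")
print(local_dir)  # local path containing the downloaded files
```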

## 🧩 Tasks

- Haptic-caption retrieval: retrieve the textual descriptions in the three categories that correspond to a given haptic signal, using the haptic signal as the query and the descriptions as the target documents (see the retrieval sketch after this list).
- Train / validation / test splits:

<https://drive.google.com/drive/folders/1PfM2fjIHFDx1PtWADJHwo3TM2SRbp2tL?usp=drive_link>
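
The sketch below illustrates the retrieval setup with cosine similarity over precomputed embeddings. The random embeddings are stand-ins for real signal- and text-encoder outputs; the paper's actual models are in the GitHub repository linked above.

```python
# Rank caption embeddings by cosine similarity to a query signal embedding.
import numpy as np


def retrieve_top_k(signal_emb, caption_embs, k=5):
    """Return indices of the k captions most similar to the signal."""
    q = signal_emb / np.linalg.norm(signal_emb)
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    scores = c @ q                               # cosine similarity per caption
    return np.argsort(-scores)[:k]


rng = np.random.default_rng(0)
signal_emb = rng.normal(size=128)                # stand-in signal encoder output
caption_embs = rng.normal(size=(100, 128))       # stand-in text encoder outputs
print(retrieve_top_k(signal_emb, caption_embs))
```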

## 🧩 Models

We design a supervised contrastive learning framework that pulls samples of the same class together in an embedding space while pushing apart samples from different classes.

<p align="center">
<img width="610" height="244" alt="image" src="https://github.com/user-attachments/assets/781be7dc-3674-401d-8d41-86f07ab1b205" />
</p>
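
A minimal sketch of such a supervised contrastive loss, following the SupCon-style formulation of Khosla et al. (2020); the paper's exact loss and hyperparameters may differ:

```python
# Supervised contrastive loss: same-label embeddings are pulled together,
# different-label embeddings pushed apart.
import torch
import torch.nn.functional as F


def supcon_loss(embeddings, labels, tau=0.07):
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                                # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability of each anchor's positives
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    n_pos = pos_mask.sum(1)
    loss = -pos_log_prob[n_pos > 0] / n_pos[n_pos > 0]
    return loss.mean()


emb = torch.randn(8, 128)                        # toy batch of embeddings
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])  # class labels per sample
print(supcon_loss(emb, labels))
```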

---

## 🚀 Citation

If you find this dataset useful for your research, please cite our paper:

```bibtex
@article{hu2025hapticcap,
  title={HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals},
  author={Hu, Guimin and Hershcovich, Daniel and Seifi, Hasti},
  journal={arXiv preprint arXiv:2507.13318},
  year={2025}
}
```