Improve dataset card: add metadata and links to paper/project/code
#2
by nielsr HF Staff - opened

README.md CHANGED
@@ -1,15 +1,61 @@
----
-license: mit
----
-TAPFormer: Robust Arbitrary Point Tracking via Transient Asynchronous Fusion of Frames and Events (CVPR 2026)
+---
+license: mit
+task_categories:
+- other
+tags:
+- computer-vision
+- point-tracking
+- event-camera
+- multimodal
+---
+
+# TAPFormer: Robust Arbitrary Point Tracking via Transient Asynchronous Fusion of Frames and Events
+
+[**Project Page**](https://tapformer.github.io/) | [**Paper**](https://huggingface.co/papers/2603.04989) | [**GitHub**](https://github.com/ljx1002/TAPFormer)
+
+TAPFormer is a transformer-based framework for robust and high-frequency arbitrary point tracking (TAP). It introduces a Transient Asynchronous Fusion (TAF) mechanism to bridge the gap between low-rate RGB frames and high-rate event streams.
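+
+As a rough intuition for that rate gap (a generic sketch only, not TAPFormer's TAF module), events can be binned into the time windows between consecutive frames:
+
+```python
+import numpy as np
+
+def bin_events_between_frames(events: np.ndarray, frame_ts: np.ndarray) -> list:
+    """Group an event stream into per-frame-interval chunks.
+
+    Generic illustration only, NOT the TAF mechanism. `events` is
+    assumed to be an (N, 4) array of (t, x, y, polarity) rows sorted
+    by timestamp; `frame_ts` holds sorted frame timestamps.
+    """
+    # Interval index per event: frame_ts[i] <= t < frame_ts[i + 1].
+    idx = np.searchsorted(frame_ts, events[:, 0], side="right") - 1
+    return [events[idx == i] for i in range(len(frame_ts) - 1)]
+```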
+
+This repository hosts the **InivTAP** and **DrivTAP** benchmarks, which are real-world frame-event TAP datasets covering diverse illumination and motion conditions.
+
+## Dataset Summary
+
+- **InivTAP**: Indoor sequences with various objects and lighting conditions, including synchronized frame-event data.
+- **DrivTAP**: Outdoor driving sequences captured under realistic conditions.
+
+## Dataset Structure
+
+To use these datasets with the TAPFormer codebase, ensure your data is organized as follows:
+
+```
+dataset_dir/
+├── InivTAP/
+│   └── sequence_name/
+│       ├── events/
+│       ├── images_corrected/
+│       └── annotations.npy
+└── DrivTAP/
+    └── sequence_name/
+        ├── events/
+        ├── images_corrected/
+        └── annotations.npy
+```
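+
+As a rough illustration, one sequence could then be loaded along these lines (a sketch: the file format inside `events/` and the `annotations.npy` schema are assumptions here; see the TAPFormer repository for the authoritative loader):
+
+```python
+from pathlib import Path
+
+import numpy as np
+
+def load_sequence(dataset_dir: str, benchmark: str, sequence: str):
+    """Load one frame-event sequence from the layout above.
+
+    Hypothetical sketch: the contents of `events/` and the schema of
+    `annotations.npy` are assumptions, not documented facts.
+    """
+    seq_dir = Path(dataset_dir) / benchmark / sequence
+    # Corrected RGB frames, assumed stored as sorted image files.
+    frame_paths = sorted((seq_dir / "images_corrected").iterdir())
+    # Raw event data files (chunk format assumed).
+    event_paths = sorted((seq_dir / "events").iterdir())
+    # Point-track annotations; allow_pickle in case a dict was saved.
+    annotations = np.load(seq_dir / "annotations.npy", allow_pickle=True)
+    return frame_paths, event_paths, annotations
+
+# e.g. frames, events, anns = load_sequence("dataset_dir", "InivTAP", "sequence_name")
+```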
+
+## Citation
+
+If you use this dataset or code in your research, please cite:
+
+```bibtex
+@article{liu2026tapformer,
+  title={TAPFormer: Robust Arbitrary Point Tracking via Transient Asynchronous Fusion of Frames and Events},
+  author={Liu, Jiaxiong and Tan, Zhen and Zhang, Jinpu and Zhou, Yi and Shen, Hui and Chen, Xieyuanli and Hu, Dewen},
+  journal={arXiv preprint arXiv:2603.04989},
+  year={2026}
+}
+
+@inproceedings{liu2025tracking,
+  title={Tracking any point with frame-event fusion network at high frame rate},
+  author={Liu, Jiaxiong and Wang, Bo and Tan, Zhen and Zhang, Jinpu and Shen, Hui and Hu, Dewen},
+  booktitle={2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
+  pages={18834--18840},
+  year={2025},
+  organization={IEEE}
+}
+```
|