# TAPNet

This repository contains checkpoints for several point tracking models developed by Google DeepMind.

**Code**: [https://github.com/google-deepmind/tapnet](https://github.com/google-deepmind/tapnet)

## Included Models

- **TAPIR** (*Tracking Any Point with Implicit Representations*) – a fast and accurate point tracker for continuous point trajectories in space-time.
  **Project page**: [https://deepmind-tapir.github.io/](https://deepmind-tapir.github.io/)

- **BootsTAPIR** – a bootstrapped variant of TAPIR that improves robustness and stability across long videos via self-supervised refinement.
  **Project page**: [https://bootstap.github.io/](https://bootstap.github.io/)

- **TAPNext** – a generative approach that frames point tracking as next-token prediction, enabling semi-dense, accurate, and temporally coherent tracking across challenging videos, as described in the paper [**TAPNext: Tracking Any Point (TAP) as Next Token Prediction**](https://huggingface.co/papers/2504.05579).
  **Project page**: [https://tap-next.github.io/](https://tap-next.github.io/)

These models provide state-of-the-art performance for tracking arbitrary points in videos and support research and applications in robotics, perception, and video generation.
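All of these trackers share the same task shape: given a video and a set of sparse query points, predict each point's position (and occlusion) in every frame. The sketch below illustrates only that data layout; `dummy_tracker`, the array shapes, and the `(t, y, x)` query convention are illustrative assumptions, not the actual TAPNet API — see the repository linked above for real inference code.

```python
import numpy as np

def dummy_tracker(video: np.ndarray, query_points: np.ndarray):
    """Hypothetical stand-in for a point tracker (NOT the real TAPNet API).

    video:        (T, H, W, 3) uint8 frames.
    query_points: (N, 3) rows of (t, y, x) — the frame index and pixel
                  location where each tracked point is first queried.
    Returns:
    tracks:   (N, T, 2) predicted (y, x) position of each point per frame.
    occluded: (N, T) boolean occlusion flag per point per frame.
    """
    num_frames = video.shape[0]
    num_points = query_points.shape[0]
    # A real model predicts motion; this stub just repeats each query
    # location across all frames to show the output layout.
    tracks = np.tile(query_points[:, None, 1:3], (1, num_frames, 1))
    occluded = np.zeros((num_points, num_frames), dtype=bool)
    return tracks, occluded

video = np.zeros((8, 64, 64, 3), dtype=np.uint8)        # 8 frames, 64x64 RGB
queries = np.array([[0, 32.0, 32.0], [2, 10.0, 50.0]])  # two (t, y, x) queries

tracks, occluded = dummy_tracker(video, queries)
print(tracks.shape, occluded.shape)  # (2, 8, 2) (2, 8)
```

The key point is that tracking is queried per point, not per frame: each model consumes the whole clip at once and emits a full trajectory for every query.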