---
license: apache-2.0
---

# [APTv2 Dataset](https://github.com/ViTAE-Transformer/APTv2)

**APTv2** is a large-scale benchmark for **animal pose estimation and tracking** across 30 species. It provides high-quality **keypoint** and **tracking annotations** for 84,611 animal instances spanning **2,749 video clips** (41,235 frames total).

### 📦 Dataset Overview

* **Total videos:** 2,749
* **Frames per clip:** 15
* **Total frames:** 41,235
* **Annotated instances:** 84,611
* **Species:** 30
* **Benchmark tracks:**
  1. Single-frame pose estimation
  2. Low-data generalization
  3. Pose tracking
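
Datasets in this line of work (AP-10K, APT-36K, APTv2) typically ship COCO-style keypoint annotations, where each instance stores keypoints as flattened `[x, y, v]` triplets and a `track_id` links instances across frames. The sketch below is a minimal, hedged illustration of reading such annotations; the field names and the tiny inline sample are assumptions for demonstration and should be checked against the actual APTv2 release files.

```python
import json

# Illustrative COCO-style annotation snippet (assumption: real APTv2 files
# are full JSON files with these or similar fields; verify against the release).
sample = {
    "images": [{"id": 1, "file_name": "clip_0001/000001.jpg", "video_id": 1}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "track_id": 3,
            # keypoints stored as [x1, y1, v1, x2, y2, v2, ...] triplets,
            # where v is the COCO visibility flag (0 = unlabeled)
            "keypoints": [120.0, 85.0, 2, 133.5, 90.2, 2, 0.0, 0.0, 0],
        }
    ],
    "categories": [{"id": 1, "name": "antelope"}],
}

def visible_keypoints(ann):
    """Return (x, y) pairs whose visibility flag v > 0."""
    kps = ann["keypoints"]
    return [
        (kps[i], kps[i + 1])
        for i in range(0, len(kps), 3)
        if kps[i + 2] > 0
    ]

# Round-trip through JSON as if the annotation had been read from disk.
data = json.loads(json.dumps(sample))
pts = visible_keypoints(data["annotations"][0])
print(pts)  # two visible joints; the third triplet is marked unlabeled
```

Grouping annotations by `track_id` within the same `video_id` is then enough to recover per-animal trajectories for the pose-tracking track.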

### 🧠 Citation

If you use this dataset, please cite:

```bibtex
@misc{yang2023aptv2,
  title={APTv2: Benchmarking Animal Pose Estimation and Tracking with a Large-scale Dataset and Beyond},
  author={Yuxiang Yang and Yingqi Deng and Yufei Xu and Jing Zhang},
  year={2023},
  eprint={2312.15612},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

### 📚 Reference

Original paper: [APTv2 on arXiv](https://arxiv.org/abs/2312.15612)

Code: [GitHub](https://github.com/ViTAE-Transformer/APTv2)