---
license: apache-2.0
library_name: acaua
pipeline_tag: keypoint-detection
tags:
  - pose-estimation
  - keypoint-detection
  - vision
  - acaua
  - native-pytorch-port
  - rtmpose
datasets:
  - coco
  - aic
---

# RTMPose-tiny (COCO 17-keypoint) — acaua mirror (pure-PyTorch port)

This is a **pure-PyTorch port** of [RTMPose-tiny](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose), hosted under `CondadosAI/` for use with the [acaua](https://github.com/CondadosAI/acaua) computer vision library. The architecture has been re-implemented in pure PyTorch under `acaua.adapters.rtmpose` — no `mmcv`, no `mmengine`, no `mmpose`, no `trust_remote_code`.

The weights in this mirror are converted from the upstream `.pth` checkpoint to safetensors, with state_dict keys renamed to match the acaua adapter, and load cleanly via `load_state_dict(strict=True)` into our `nn.Module` tree.

RTMPose is a **top-down** model: it consumes a person bounding box and predicts the 17 COCO keypoints for that person. The acaua adapter bundles [`CondadosAI/rtmdet_t_coco`](https://huggingface.co/CondadosAI/rtmdet_t_coco) as the person detector, giving you a single-call `predict(image)` API that returns boxes and keypoints together.
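If you want to reproduce the conversion yourself, first confirm that the downloaded upstream checkpoint matches the SHA256 digest listed in the provenance table. A minimal sketch using only the Python standard library (the local filename is illustrative):

```python
import hashlib

# Digest from the provenance table below.
EXPECTED_SHA256 = "e84eb5b9ee9432259bdd19d6a01156604ba27139ca6373ddb4ee7aa290d528e9"

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local path to the downloaded upstream checkpoint:
# assert sha256_file("rtmpose_tiny_upstream.pth") == EXPECTED_SHA256
```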
## Provenance

| | |
|---|---|
| Upstream code (architecture) | [`open-mmlab/mmpose`](https://github.com/open-mmlab/mmpose) @ `759b39c13fea6ba094afc1fa932f51dc1b11cbf9` |
| Upstream code (backbone) | [`open-mmlab/mmdetection`](https://github.com/open-mmlab/mmdetection) @ `cfd5d3a985b0249de009b67d04f37263e11cdf3d` (CSPNeXt) |
| Upstream weights URL | `https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-tiny_simcc-aic-coco_pt-aic-coco_420e-256x192-cfc8f33d_20230126.pth` |
| Upstream weights SHA256 | `e84eb5b9ee9432259bdd19d6a01156604ba27139ca6373ddb4ee7aa290d528e9` |
| Conversion script | [`scripts/convert_rtmpose.py`](https://github.com/CondadosAI/acaua/blob/main/scripts/convert_rtmpose.py) |
| Bundled detector | [`CondadosAI/rtmdet_t_coco`](https://huggingface.co/CondadosAI/rtmdet_t_coco) |
| Paper | Jiang et al., *"RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose"*, arXiv:[2303.07399](https://arxiv.org/abs/2303.07399) |
| COCO val AP | 68.5 @ 256×192 (top-down, 17 keypoints) |
| Mirrored on | 2026-04-22 |
| Mirrored by | [CondadosAI/acaua](https://github.com/CondadosAI/acaua) |

## Usage

```python
import acaua
import cv2
import supervision as sv

model = acaua.Model.from_pretrained("CondadosAI/rtmpose_t_coco")
result = model.predict("photo.jpg")
# `result` is a PoseResult: boxes (from RTMDet), keypoints (from RTMPose).

kp = result.to_supervision()  # supervision.KeyPoints
scene = cv2.imread("photo.jpg")
annotated = sv.EdgeAnnotator(edges=model.skeleton).annotate(scene, kp)
```

## License and attribution

Redistributed under Apache-2.0, consistent with both the upstream code (mmpose / mmdetection, both Apache-2.0 by OpenMMLab) and the upstream weights declaration. The acaua adapter is a derivative work of the upstream PyTorch implementations — see [`NOTICE`](./NOTICE) for the required attribution chain (code AND weights).
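For intuition about what happens between the bundled detector and the pose head: a top-down model warps each detected person box into the fixed 256×192 input crop. A hedged sketch of that geometry — the function name and the `padding` factor (1.25 is a common top-down default) are illustrative assumptions, not acaua's actual internals:

```python
import numpy as np

def box_to_input_affine(box, out_w=192, out_h=256, padding=1.25):
    """2x3 affine matrix mapping a person box to the model input crop.

    `padding` adds context around the box; 1.25 is a common default in
    top-down pipelines, assumed here rather than confirmed for acaua.
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * padding, (y2 - y1) * padding
    aspect = out_w / out_h
    # Grow the shorter side so the crop keeps the input aspect ratio.
    if w / h > aspect:
        h = w / aspect
    else:
        w = h * aspect
    s = out_w / w  # uniform scale (equals out_h / h after the fix above)
    return np.array([[s, 0.0, out_w / 2.0 - s * cx],
                     [0.0, s, out_h / 2.0 - s * cy]])
```

The resulting matrix can be fed to `cv2.warpAffine(image, M, (192, 256))` to produce the crop, and its inverse maps predicted keypoints back to original-image coordinates.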
## Citation

```bibtex
@misc{jiang2023rtmpose,
  title={RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose},
  author={Tao Jiang and Peng Lu and Li Zhang and Ningsheng Ma and Rui Han and Chengqi Lyu and Yining Li and Kai Chen},
  year={2023},
  eprint={2303.07399},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```