RTMO-s (body7) — acaua mirror (pure-PyTorch port)
====================================================

This product includes:

1. PORTED SOURCE CODE: the pure-PyTorch port of the RTMO architecture
   (located in the acaua repository at src/acaua/adapters/rtmo/) is a
   derivative work of:

     - OpenMMLab's mmpose implementation
       https://github.com/open-mmlab/mmpose @ commit
       759b39c13fea6ba094afc1fa932f51dc1b11cbf9 — Apache-2.0
       Files derived from:
         mmpose/models/backbones/csp_darknet.py
         mmpose/models/necks/hybrid_encoder.py
         mmpose/models/heads/hybrid_heads/rtmo_head.py
         mmpose/models/heads/hybrid_heads/yoloxpose_head.py
         mmpose/models/utils/csp_layer.py
         mmpose/models/utils/reparam_layers.py
         mmpose/models/utils/transformer.py
         mmpose/evaluation/functional/nms.py
     - mmcv primitives (ConvModule, DepthwiseSeparableConvModule,
       FFN, MultiheadAttention, Scale), vendored here as pure-PyTorch
       equivalents.

   Paper: Peng Lu, Tao Jiang, Yining Li, Xiangtai Li, Kai Chen,
   Wenming Yang, "RTMO: Towards High-Performance One-Stage Real-Time
   Multi-Person Pose Estimation", CVPR 2024 (arXiv:2312.07526).

2. CONVERTED WEIGHTS: the model.safetensors file in this mirror is a
   key-remapped conversion of the upstream pretrained checkpoint:

     - upstream URL:    https://download.openmmlab.com/mmpose/v1/projects/rtmo/rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.pth
     - upstream SHA256: dac2bf749bbfb51e69ca577ca0327dff4433e3be9a56b782f0b7ef94fb45247e
     - upstream paper:  Lu et al., CVPR 2024 (arXiv:2312.07526)
     - training set:    body7 = COCO + AI Challenger + CrowdPose + MPII
                        + sub-JHMDB + Halpe + PoseTrack18
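
   Before conversion, the downloaded upstream checkpoint can be checked
   against the SHA256 listed above. A minimal sketch (the filename below
   is illustrative; use whatever path the checkpoint was saved to):

   ```python
   import hashlib

   def sha256_of(path, chunk_size=1 << 20):
       """Stream the file in chunks so large checkpoints need not fit in memory."""
       h = hashlib.sha256()
       with open(path, "rb") as f:
           for chunk in iter(lambda: f.read(chunk_size), b""):
               h.update(chunk)
       return h.hexdigest()

   EXPECTED = "dac2bf749bbfb51e69ca577ca0327dff4433e3be9a56b782f0b7ef94fb45247e"
   # Illustrative usage with the upstream filename:
   # assert sha256_of("rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.pth") == EXPECTED
   ```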

   Conversion was performed by scripts/convert_rtmo.py in the acaua
   repository. The conversion is deterministic and lossless: state-dict
   keys can be remapped back to their upstream names, and tensor values
   are unchanged (no quantization or pruning is applied). Training-only
   loss-module buffers (head.loss_oks.sigmas, etc.) and
   data-preprocessor buffers (data_preprocessor.mean, .std) are
   stripped; they play no role at inference and are regenerated from
   the config at training time.
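
   The remap-and-strip steps above can be sketched as plain dictionary
   operations. This is illustrative only: the real logic lives in
   scripts/convert_rtmo.py, and the prefixes and mapping table here are
   hypothetical, not the converter's actual tables.

   ```python
   # Illustrative prefixes for training-only entries; the actual converter
   # may match keys differently.
   TRAIN_ONLY_PREFIXES = ("head.loss_", "data_preprocessor.")

   def strip_training_buffers(state_dict):
       """Drop loss-module and data-preprocessor entries; keep all inference
       weights untouched (no tensor values are altered)."""
       return {
           k: v for k, v in state_dict.items()
           if not k.startswith(TRAIN_ONLY_PREFIXES)
       }

   def remap_keys(state_dict, mapping):
       """Apply a reversible key rename; keys absent from the mapping pass
       through unchanged, so the remap can be inverted with the reversed
       mapping."""
       return {mapping.get(k, k): v for k, v in state_dict.items()}
   ```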

Mirrored on 2026-04-22 by CondadosAI.

License
-------
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.