docs: add NOTICE attribution per Apache-2.0 s4(d) (code AND weights)
NOTICE
ADDED

RTMO-s (body7) — acaua mirror (pure-PyTorch port)
====================================================

This product includes:

1. PORTED SOURCE CODE: a pure-PyTorch port of the RTMO architecture
   (located in the acaua repository at src/acaua/adapters/rtmo/),
   which is a derivative work of:

   - OpenMMLab's mmpose implementation
     https://github.com/open-mmlab/mmpose @ commit
     759b39c13fea6ba094afc1fa932f51dc1b11cbf9 — Apache-2.0
     Files derived from:
       mmpose/models/backbones/csp_darknet.py
       mmpose/models/necks/hybrid_encoder.py
       mmpose/models/heads/hybrid_heads/rtmo_head.py
       mmpose/models/heads/hybrid_heads/yoloxpose_head.py
       mmpose/models/utils/csp_layer.py
       mmpose/models/utils/reparam_layers.py
       mmpose/models/utils/transformer.py
       mmpose/evaluation/functional/nms.py
   - Plus mmcv primitives (ConvModule, DepthwiseSeparableConvModule,
     FFN, MultiheadAttention, Scale) vendored as pure-PyTorch
     equivalents.

   Paper: Peng Lu, Tao Jiang, Yining Li, Xiangtai Li, Kai Chen,
   Wenming Yang, "RTMO: Towards High-Performance One-Stage Real-Time
   Multi-Person Pose Estimation", CVPR 2024 (arXiv:2312.07526).

2. CONVERTED WEIGHTS: the model.safetensors file in this mirror is a
   key-remapped conversion of the upstream pretrained checkpoint:

   - upstream URL: https://download.openmmlab.com/mmpose/v1/projects/rtmo/rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.pth
   - upstream SHA256: dac2bf749bbfb51e69ca577ca0327dff4433e3be9a56b782f0b7ef94fb45247e
     (see the verification sketch after this list)
   - upstream paper: Lu et al., CVPR 2024 (arXiv:2312.07526)
   - training set: body7 = COCO + AI Challenger + CrowdPose + MPII
     + sub-JHMDB + Halpe + PoseTrack18

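   The digest above can be checked with standard tooling before the
   converted weights are trusted. Below is a minimal illustrative
   Python sketch; the file name is taken from the upstream URL, and
   the helper is an assumption for illustration, not part of the
   acaua tooling:

       import hashlib

       def sha256_of(path: str, chunk: int = 1 << 20) -> str:
           # Stream the file so large checkpoints never load into memory.
           h = hashlib.sha256()
           with open(path, "rb") as f:
               while block := f.read(chunk):
                   h.update(block)
           return h.hexdigest()

       expected = "dac2bf749bbfb51e69ca577ca0327dff4433e3be9a56b782f0b7ef94fb45247e"
       actual = sha256_of("rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.pth")
       assert actual == expected, "checkpoint does not match the recorded SHA256"
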
   Conversion was performed by scripts/convert_rtmo.py in the acaua
   repository. The conversion is deterministic and reversible: the
   state-dict key remapping can be inverted, and tensor values pass
   through losslessly (no quantization or pruning is applied).
   Training-only loss-module buffers (head.loss_oks.sigmas, etc.) and
   data-preprocessor buffers (data_preprocessor.mean, .std) are
   stripped — they have no role at inference and are regenerated from
   config at training time.

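   To make "key-remapped conversion" concrete, here is a sketch of
   the remap-and-strip pattern. The authoritative key table lives in
   scripts/convert_rtmo.py; the prefixes and the example rename rule
   below are hypothetical placeholders, not the actual mapping:

       import torch
       from safetensors.torch import save_file

       # Hypothetical prefixes for buffers with no role at inference.
       STRIP_PREFIXES = ("data_preprocessor.", "head.loss_oks.")

       ckpt = torch.load(
           "rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.pth",
           map_location="cpu",
           weights_only=False,  # mmpose .pth files carry metadata beyond tensors
       )
       state = ckpt["state_dict"]  # mmengine checkpoints nest weights here

       remapped = {}
       for key, tensor in state.items():
           if key.startswith(STRIP_PREFIXES):
               continue  # stripped: regenerated from config at training time
           # Hypothetical one-to-one rename (mmpose names -> acaua names);
           # a bijective rename is what keeps the conversion reversible.
           remapped[key.replace("neck.", "encoder.")] = tensor.contiguous()

       save_file(remapped, "model.safetensors")
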
Mirrored on 2026-04-22 by CondadosAI.

License
-------
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.