
A Characteristic Function-based Method for Bottom-up Human Pose Estimation

Haoxuan $\mathrm{Qu}^{1}$ , Yujun $\mathrm{Cai}^{2}$ , Lin Geng $\mathrm{Foo}^{1}$ , Ajay Kumar $^{3}$ , Jun Liu $^{1,*}$

$^{1}$ Singapore University of Technology and Design, Singapore

$^{2}$ Nanyang Technological University, Singapore

$^{3}$The Hong Kong Polytechnic University, Hong Kong

haoxuan-qu@mymail.sutd.edu.sg, yujun001@e.ntu.edu.sg, lingeng.foo@mymail.sutd.edu.sg

ajay.kumar@polyu.edu.hk, jun.liu@sutd.edu.sg

Abstract

Most recent methods formulate the task of human pose estimation as a heatmap estimation problem, and use the overall L2 loss computed from the entire heatmap to optimize the heatmap prediction. In this paper, we show that in bottom-up human pose estimation, where each heatmap often contains multiple body joints, using the overall L2 loss to optimize the heatmap prediction may not be the optimal choice. This is because minimizing the overall L2 loss cannot always lead the model to locate all the body joints across different sub-regions of the heatmap more accurately. To cope with this problem, from a novel perspective, we propose a new bottom-up human pose estimation method that optimizes the heatmap prediction via minimizing the distance between two characteristic functions respectively constructed from the predicted heatmap and the groundtruth heatmap. Our analysis presented in this paper indicates that the distance between these two characteristic functions is essentially an upper bound of the L2 losses w.r.t. sub-regions of the predicted heatmap. Therefore, via minimizing the distance between the two characteristic functions, we can optimize the model to provide a more accurate localization result for the body joints in different sub-regions of the predicted heatmap. We show the effectiveness of our proposed method through extensive experiments on the COCO dataset and the CrowdPose dataset.

1. Introduction

Human pose estimation aims to locate the body joints of each person in a given RGB image. It is relevant to various applications, such as action recognition [7, 43], person Re-ID [28], and human object interaction [35]. For tackling human pose estimation, most of the recent methods fall

into two major categories: top-down methods and bottom-up methods. Top-down methods [24,32,33,39,44] generally use a human detector to detect all the people in the image, and then perform single-person pose estimation for each detected subject separately. In contrast, bottom-up methods [5,6,16,17,22,23,25,26] usually locate the body joints of all people in the image at the same time. Hence, bottom-up methods, the main focus of this paper, are often a more efficient choice compared to top-down methods, especially when there are many people in the input image [5].

In existing works, it is common to regard human pose estimation as a heatmap prediction problem, since this can preserve the spatial structure of the input image throughout the encoding and decoding process [12]. During the general optimization process, the groundtruth (GT) heatmaps $\mathbf{H}_g$ are first constructed by placing 2D Gaussian blobs centered at the GT coordinates of the body joints. After that, these constructed GT heatmaps are used to supervise the predicted heatmaps $\mathbf{H}_p$ via the overall L2 loss $L_2^{overall}$ calculated (averaged) over the whole heatmap. Specifically, denoting the area of the heatmap as $A$, we have $L_2^{overall} = \frac{\|\mathbf{H}_p - \mathbf{H}_g\|_2^2}{A}$.
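As a concrete reference, the overall L2 loss above can be sketched in NumPy as follows (a minimal sketch with names of our own choosing; actual implementations operate on batched tensors in a deep learning framework):

```python
import numpy as np

def overall_l2_loss(H_p: np.ndarray, H_g: np.ndarray) -> float:
    """Overall L2 loss: squared L2 distance between the predicted and GT
    heatmaps, averaged over the heatmap area A."""
    A = H_g.size
    return float(np.sum((H_p - H_g) ** 2) / A)
```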

We argue that using the overall L2 loss to supervise the predicted heatmap may not be the optimal choice in bottom-up methods, where each heatmap often contains multiple body joints from multiple people in various sub-regions, as shown in Fig. 1(b). This is because a smaller overall L2 loss calculated over the whole heatmap cannot always lead the model to locate all the body joints across different sub-regions in the heatmap more accurately. As illustrated in Fig. 1(a), the predicted heatmap #2 has a smaller overall L2 loss compared to the predicted heatmap #1. However, the predicted heatmap #2 locates the body joint in the top-right sub-region wrongly, whereas the predicted heatmap #1 locates body joints in both the top-right and bottom-left sub-regions correctly.


Figure 1. (a) Illustration of heatmaps. The predicted heatmap #2 with a smaller overall L2 loss locates the body joint in the top-right sub-region wrongly, while the predicted heatmap #1 with a larger overall L2 loss locates body joints in both the top-right and bottom-left sub-regions correctly. (b) Output of a commonly used bottom-up method, HrHRNet-W32 [6]. As shown, it misses the left ankle in the dashed sub-region of image (i) completely, and misidentifies the right knee in the dashed sub-region of image (ii). This indicates that accurately localizing the body joints of multiple people in a single heatmap is a challenging problem. (Best viewed in color.)

While the decrease of the overall L2 loss can be achieved when the L2 loss w.r.t. each sub-region either decreases or remains the same (e.g., from predicted heatmap #0 to predicted heatmap #1), it can also be achieved when there is a decrease of the L2 loss w.r.t. certain sub-regions and an increase of the L2 loss for some other sub-regions (e.g., from predicted heatmap #1 to predicted heatmap #2). This indicates that, in bottom-up methods, the decrease of the overall L2 loss does not always lead to a more accurate localization result for the body joints in different sub-regions of the predicted heatmap at the same time. Besides, we also show some results of a commonly used bottom-up method, HrHRNet-W32 [6], in Fig. 1(b). As shown, it may even miss or misidentify certain body joints when there are a number of people in the input image. This indicates that it is quite difficult to accurately locate all body joints of all people in the predicted heatmap.
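The argument above can be checked numerically. The toy example below (hypothetical 1x4 "heatmaps" of our own construction, with the left and right halves as two sub-regions) shows the overall L2 loss decreasing while the L2 loss w.r.t. one sub-region increases:

```python
import numpy as np

def l2(a, b):
    """Squared L2 distance averaged over the given region."""
    return np.sum((a - b) ** 2) / a.size

# Toy 1x4 "heatmap": GT has a joint at each end.
H_g  = np.array([1.0, 0.0, 0.0, 1.0])
H_p1 = np.array([0.7, 0.0, 0.0, 0.7])  # both joints roughly located
H_p2 = np.array([1.0, 0.0, 0.3, 0.8])  # left joint perfect, right one degraded

# The overall loss prefers H_p2 ...
overall1, overall2 = l2(H_p1, H_g), l2(H_p2, H_g)
# ... even though the loss on the right sub-region got worse.
right1, right2 = l2(H_p1[2:], H_g[2:]), l2(H_p2[2:], H_g[2:])
```

Here `overall2 < overall1` while `right2 > right1`, mirroring the transition from predicted heatmap #1 to #2 in Fig. 1(a).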

To tackle the above-mentioned problem in bottom-up methods, in this paper, rather than using the overall L2 loss to supervise the whole heatmap, we instead aim to optimize the body joints over sub-regions of the predicted heatmap at the same time. To this end, from a new perspective, we express the predicted and GT heatmaps as characteristic functions, and minimize the difference between these functions, allowing different sub-regions of the predicted heatmap to be optimized at the same time.

More specifically, we first construct two distributions respectively from the predicted heatmap and the GT heatmap. After that, we obtain two characteristic functions of these two distributions and optimize the heatmap prediction via minimizing the distance between these two characteristic functions. We analyze in Sec. 3.3 that the distance between the two characteristic functions is the upper bound of the

L2 losses w.r.t sub-regions in the predicted heatmap. Therefore, via minimizing the distance between the two characteristic functions, our method can locate body joints in different sub-regions more accurately at the same time, and thus achieve superior performance.

The contributions of our work are summarized as follows. 1) From a new perspective, we supervise the predicted heatmap using the distance between the characteristic functions of the predicted and GT heatmaps. 2) We analyze (in Sec. 3.3) that the L2 losses w.r.t. sub-regions of the predicted heatmap are upper-bounded by the distance between the characteristic functions. 3) Our proposed method achieves state-of-the-art performance on the evaluation benchmarks [19, 21].

2. Related Work

Human Pose Estimation. Due to its wide range of applications, human pose estimation has received a lot of attention [5, 6, 16, 17, 22-26, 29, 32, 33, 39, 44], and most of the recent methods fall into two categories: top-down methods and bottom-up methods. In top-down methods, a human detector is generally used to detect all the people in the image first, and then single-person pose estimation is conducted for each detected subject separately. The single-person pose estimation methods commonly used in top-down methods include Hourglass [24], Simple Baseline [39], HRNet [32], and HRFormer [44]. Besides top-down methods, bottom-up methods [5, 6, 16, 17, 22, 23, 25, 26] have also attracted a lot of attention recently due to their efficiency [5].

In bottom-up methods, most methods first detect all

identity-free body joints over the whole input image, and then group them into different people. Among these methods, DeepCut and PersonLab [14,26,27] incorporate offset fields into their methods, while OpenPose and PifPaf [5,17] make use of part affinity fields. From another perspective, associative embedding [23] trains the model to output the group assignments and the localization results of the body joints at the same time, and HGG [16] further builds graph neural networks on top of associative embedding. Besides the above methods, there also exist some bottom-up methods [11,25,45] that directly regress the coordinates of body joints belonging to the same person.

Existing heatmap-based bottom-up methods often use an overall L2 loss calculated over the whole heatmap to optimize heatmap prediction. Differently, in this paper, we propose a new bottom-up method that optimizes the heatmap prediction via minimizing the difference between the characteristic functions of the predicted and GT heatmaps.

Characteristic Function. The characteristic function, a concept originally proposed in probability theory and statistics, has been studied in various areas [2,8,9,13,20,31,41] over the years, such as two-sample testing [8,9,13], generative adversarial nets [2,20], and few-shot classification [41]. Inspired by these works, in this paper, from a novel perspective, we propose to optimize the heatmap prediction for bottom-up human pose estimation via minimizing the distance between two characteristic functions. We theoretically analyze that the distance between the two characteristic functions respectively constructed from the predicted heatmap and the GT heatmap is the upper bound of the L2 losses w.r.t. sub-regions of the predicted heatmap.

3. Method

In bottom-up human pose estimation, as shown in Fig. 1(a), minimizing the overall L2 loss between the predicted heatmap and the GT heatmap cannot always lead the model to locate all the body joints across different sub-regions of the heatmap more accurately. In this work, we aim to optimize the body joints over sub-regions of the predicted heatmap at the same time. To achieve this, we propose a new bottom-up method that optimizes the heatmap prediction via minimizing the distance between two characteristic functions constructed from the predicted and GT heatmaps.

Below, we first briefly introduce the characteristic function, and then discuss how we formulate the heatmap optimization process. After that, we show the theoretical analysis of our proposed method.

3.1. Revisiting Characteristic Function

The characteristic function is generally used in probability theory and statistics. Given an $N$ -dimensional distribution $D$ , its corresponding characteristic function $\varphi_{D}$ can be

written as:

$$\varphi_{D}(\mathbf{t}) = E_{\mathbf{x} \sim D}\left[ e^{i \langle \mathbf{t}, \mathbf{x} \rangle} \right] = \int_{\mathbb{R}^{N}} e^{i \langle \mathbf{t}, \mathbf{x} \rangle} \, dD \tag{1}$$

where $E$ represents expectation, $i^2 = -1$, $\langle \cdot, \cdot \rangle$ represents the dot product, $\mathbf{t}$ is an $N$-dimensional vector, and $\mathbf{x}$ is an $N$-dimensional vector sampled from $D$. Note that the characteristic function always exists and has a one-to-one correspondence with the distribution. Besides, the characteristic function is the Fourier transform of the probability density function when the latter exists. Moreover, the characteristic function is always finite and bounded ($|\varphi_D(\mathbf{t})| \leq 1$). This makes the calculation of the distance between two characteristic functions always meaningful.
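For a discrete 2-D distribution such as a normalized heatmap, the expectation in Eq. 1 becomes a finite sum over pixel coordinates. A minimal NumPy sketch (the function name and coordinate convention are our own assumptions):

```python
import numpy as np

def char_fn(D: np.ndarray, t: np.ndarray) -> complex:
    """Characteristic function of a discrete 2-D distribution D (Eq. 1):
    phi_D(t) = E_{x~D}[exp(i<t, x>)] = sum_x D[x] * exp(i<t, x>),
    where x ranges over the pixel coordinates of D."""
    h, w = D.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return complex(np.sum(D * np.exp(1j * (t[0] * xs + t[1] * ys))))
```

By construction, `char_fn(D, 0) == 1` and `abs(char_fn(D, t)) <= 1`, matching the boundedness property mentioned above.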

3.2. Proposed Heatmap Optimization Process

Below, we discuss how we formulate the heatmap optimization process for bottom-up human pose estimation via (1) constructing two distributions from the predicted heatmap and the GT heatmap respectively; (2) calculating characteristic functions from these two distributions; and (3) formulating the loss function as the distance between the two characteristic functions.

Distribution Construction. Given an input image, for each type of body joints, we denote the corresponding predicted heatmap as $H_{p}$ and the corresponding GT heatmap as $H_{g}$ . We propose to formulate the two distributions $D(H_{p})$ and $D(H_{g})$ from the two heatmaps $H_{p}$ and $H_{g}$ with the following two steps. (1) As distributions cannot hold negative probabilities, we first pass $H_{p}$ through a relu activation function to make it non-negative. Note that $H_{g}$ is already non-negative. (2) After that, as the sum of probabilities of each constructed distribution needs to be 1, we further normalize both the output of step (1) and $H_{g}$ . Hence, with the above two steps, we formulate $D(H_{p})$ and $D(H_{g})$ as:

$$D(H_{p}) = \frac{\operatorname{relu}(H_{p})}{\left\| \operatorname{relu}(H_{p}) \right\|_{1}}, \quad D(H_{g}) = \frac{H_{g}}{\left\| H_{g} \right\|_{1}} \tag{2}$$
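The two construction steps can be sketched as follows (a minimal NumPy sketch; the helper name is ours, and in training these would be differentiable tensor operations):

```python
import numpy as np

def to_distribution(H: np.ndarray) -> np.ndarray:
    """Turn a heatmap into a distribution (Eq. 2): relu to remove negative
    values, then L1-normalize so the probabilities sum to 1.
    Assumes H has at least one positive entry."""
    H = np.maximum(H, 0.0)   # step (1): distributions cannot hold negative mass
    return H / H.sum()       # step (2): sum of probabilities must be 1
```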

Characteristic Function Calculation. For every type of body joints, after formulating the two distributions $D(H_{p})$ and $D(H_{g})$ , we follow Eq. 1 to calculate the two characteristic functions $\varphi_{D(H_p)}(\mathbf{t})$ and $\varphi_{D(H_g)}(\mathbf{t})$ as:

$$\varphi_{D(H_{p})}(\mathbf{t}) = E_{\mathbf{x} \sim D(H_{p})}\left[ e^{i \langle \mathbf{t}, \mathbf{x} \rangle} \right], \quad \varphi_{D(H_{g})}(\mathbf{t}) = E_{\mathbf{x} \sim D(H_{g})}\left[ e^{i \langle \mathbf{t}, \mathbf{x} \rangle} \right] \tag{3}$$

Loss Function Formulation. Above we discuss how we obtain the two characteristic functions w.r.t. the predicted heatmap and the GT heatmap for a single type of body joints. Note that in bottom-up human pose estimation, multiple types of body joints are required to be located at the same time. Here, we first discuss how we formulate the loss function for a single type of body joints, and then introduce the overall loss function for all types of body joints.

To formulate the loss function for the $k$ -th type of body

joints, given the two characteristic functions $\varphi_{D(H_p)}^k (\mathbf{t})$ and $\varphi_{D(H_g)}^k (\mathbf{t})$ , we first write the loss function $L_{k}$ as the distance between these two characteristic functions [2]:

$$L_{k} = \int_{\mathbb{R}^{2}} \left\| \varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t}) \right\|_{2}^{2} \, \omega(\mathbf{t}, \eta) \, d\mathbf{t} \tag{4}$$

where $\omega (\mathbf{t},\eta)$ is a weighting function. Here we set $\omega (\mathbf{t},\eta)$ to be the probability density function of a uniform distribution in $B_{U}$ , where $B_{U} = [-U,U]\times [-U,U]$ is a finite predefined range and $U$ is a hyperparameter. This means that, $\omega (\mathbf{t},\eta) = \frac{1}{4U^2}$ when $\mathbf{t}\in B_U$ and $\omega (\mathbf{t},\eta) = 0$ otherwise. We thus further rewrite Eq. 4 as:

$$L_{k} = \int_{B_{U}} \left\| \frac{1}{2U} \left( \varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t}) \right) \right\|_{2}^{2} d\mathbf{t} \tag{5}$$

Finally, from Eq. 5, we formulate the loss function $L_{k}$ as:

$$L_{k} = \int_{B_{U}} \left\| \frac{\gamma}{2U} \left( \varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t}) \right) \right\|_{2}^{2} d\mathbf{t} \tag{6}$$

where $\gamma = \frac{U^2\sqrt{A}}{\pi^2}$ is a constant coefficient and $A$ is the area of the heatmap. Note that Eq. 6 is equivalent to Eq. 5 during the optimization process, as the efficacy of the added constant $\gamma$ can be achieved by adjusting the learning rate.

After getting the loss function for each type of body joints, we formulate the total loss for all types of joints as:

$$L_{total} = \sum_{k=1}^{K} L_{k} \tag{7}$$

where $K$ denotes the total number of body joint types.

3.3. Theoretical Analysis

Below, we perform theoretical analysis to show the effectiveness of our method for bottom-up human pose estimation. Before going into the theorem, we first introduce a lemma that can facilitate the proof of the theorem.

Lemma 1. Let $\varphi_{D}$ be the characteristic function of a 2-dimensional distribution $D$. Let $R^{r} = [x_{1}^{lower}, x_{1}^{upper}] \times [x_{2}^{lower}, x_{2}^{upper}]$ be a rectangular region, $R^{e} = \{x_{1}^{lower}, x_{1}^{upper}\} \times [x_{2}^{lower}, x_{2}^{upper}] \cup [x_{1}^{lower}, x_{1}^{upper}] \times \{x_{2}^{lower}, x_{2}^{upper}\}$ the edges of this region, and $R^{v} = \{x_{1}^{lower}, x_{1}^{upper}\} \times \{x_{2}^{lower}, x_{2}^{upper}\}$ the vertices of this region. Let $B_{T} = [-T, T] \times [-T, T]$. Denote by $[D]_{R}$ the portion of the distribution $D$ in a region $R$. $[D]_{R^{r}}$ can then be written as:

$$[D]_{R^{r}} = \left( \lim_{T \to \infty} \frac{1}{(2\pi)^{2}} \int_{B_{T}} \left( \prod_{n=1}^{2} \frac{e^{-i t_{n} x_{n}^{lower}} - e^{-i t_{n} x_{n}^{upper}}}{i t_{n}} \right) \varphi_{D}(\mathbf{t}) \, dt_{1} \, dt_{2} \right) + \epsilon([D]_{R^{r}}) \tag{8}$$

where $\epsilon([D]_{R^{r}}) = \frac{[D]_{R^{e}}}{2} + \frac{[D]_{R^{v}}}{4}$, and $dt_{1} \, dt_{2}$ are calculated based on the Lebesgue measure.

The proof of Lemma 1 is provided in the supplementary. After introducing this lemma, we analyze our proposed method below.

Theorem 1. Let $R_{sub}^{r}$ be a random rectangular sub-region in the heatmap of the $k$-th type of body joints where $\left\| [D(H_p)]_{R_{sub}^{e}} - [D(H_g)]_{R_{sub}^{e}} \right\|_2^2$ is relatively small compared to $\left\| [D(H_p)]_{R_{sub}^{r}} - [D(H_g)]_{R_{sub}^{r}} \right\|_2^2$. The relation between the L2 loss w.r.t. this sub-region and $L_k$ can be written as:

$$\frac{\left\| [D(H_{p})]_{R_{sub}^{r}} - [D(H_{g})]_{R_{sub}^{r}} \right\|_{2}^{2}}{\lambda(R_{sub}^{r})} \leq L_{k} \tag{9}$$

Note that $\lambda(R_{sub}^{r})$, the Lebesgue measure of $R_{sub}^{r}$, represents the area of $R_{sub}^{r}$.

Proof. To prove Theorem 1, we first reformulate Lemma 1 as:

$$\begin{aligned} [D]_{R^{r}} &= \left( \lim_{T \to \infty} \frac{1}{(2\pi)^{2}} \int_{B_{T}} \left( \prod_{n=1}^{2} \frac{e^{-i t_{n} x_{n}^{lower}} - e^{-i t_{n} x_{n}^{upper}}}{i t_{n}} \right) \varphi_{D}(\mathbf{t}) \, dt_{1} \, dt_{2} \right) + \epsilon([D]_{R^{r}}) \\ &= \lim_{T \to \infty} \frac{1}{(2\pi)^{2}} \int_{B_{T}} \varphi_{D}(\mathbf{t}) \int_{R^{r}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \, d\mathbf{x} \, d\mathbf{t} + \epsilon([D]_{R^{r}}) \end{aligned} \tag{10}$$

where $d\mathbf{t} = dt_1dt_2$ , and both $d\mathbf{x}$ and $d\mathbf{t}$ are calculated based on the Lebesgue measure.

After that, we rewrite $\left\| [D(H_p)]_{R_{sub}^{r}} - [D(H_g)]_{R_{sub}^{r}} \right\|_2^2$ as:

$$\begin{aligned} & \left\| [D(H_{p})]_{R_{sub}^{r}} - [D(H_{g})]_{R_{sub}^{r}} \right\|_{2}^{2} \quad (11) \\ & \approx \Big\| [D(H_{p})]_{R_{sub}^{r}} - [D(H_{g})]_{R_{sub}^{r}} - \big( \epsilon([D(H_{p})]_{R_{sub}^{r}}) - \epsilon([D(H_{g})]_{R_{sub}^{r}}) \big) \Big\|_{2}^{2} \quad (12) \\ & = \Big\| \lim_{T \to \infty} \frac{1}{(2\pi)^{2}} \int_{B_{T}} \varphi_{D(H_{p})}^{k}(\mathbf{t}) \int_{R_{sub}^{r}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \, d\mathbf{x} \, d\mathbf{t} - \lim_{T \to \infty} \frac{1}{(2\pi)^{2}} \int_{B_{T}} \varphi_{D(H_{g})}^{k}(\mathbf{t}) \int_{R_{sub}^{r}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \, d\mathbf{x} \, d\mathbf{t} \Big\|_{2}^{2} \quad (13) \\ & = \Big\| \lim_{T \to \infty} \int_{B_{T}} \int_{R_{sub}^{r}} \frac{\varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t})}{(2\pi)^{2}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \, d\mathbf{x} \, d\mathbf{t} \Big\|_{2}^{2} \quad (14) \\ & \approx \Big\| \int_{B_{U}} \int_{R_{sub}^{r}} \frac{\varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t})}{(2\pi)^{2}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \, d\mathbf{x} \, d\mathbf{t} \Big\|_{2}^{2} \quad (15) \\ & \leq 4U^{2} A \int_{B_{U}} \int_{R_{sub}^{r}} \Big\| \frac{\varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t})}{(2\pi)^{2}} e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \Big\|_{2}^{2} \, d\mathbf{x} \, d\mathbf{t} \quad (16) \end{aligned}$$

Table 1. Comparisons with bottom-up methods on the COCO val2017 set (single-scale testing).

| Method | Venue | Backbone | Input size | AP | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |
|---|---|---|---|---|---|---|---|---|
| OpenPose [5] | CVPR 2017 | VGG-19 | - | 61.0 | 84.9 | 67.5 | 56.3 | 69.3 |
| HGG [16] | ECCV 2020 | Hourglass | 512 | 60.4 | 83.0 | 66.2 | - | - |
| PersonLab [26] | ECCV 2018 | ResNet-152 | 1401 | 66.5 | 86.2 | 71.9 | 62.3 | 73.2 |
| PifPaf [17] | CVPR 2019 | ResNet-152 | - | 67.4 | - | - | - | - |
| PETR [30] | CVPR 2022 | - | 1333 | 67.4 | 87.0 | 74.9 | 61.7 | 75.9 |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 71.0 | 88.3 | 77.4 | 66.7 | 78.5 |
| PINet [37] | NIPS 2021 | HRNet-W32 | 512 | 67.4 | - | - | - | - |
| CIR&QEM [40] | AAAI 2022 | HRNet-W48 | 640 | 72.4 | 89.1 | - | 67.3 | 80.4 |
| CID [36] | CVPR 2022 | HRNet-W32 | 512 | 66.0 | 86.7 | 72.3 | 59.8 | 76.0 |
| LOGP-CAP [42] | CVPR 2022 | HRNet-W48 | 640 | 72.2 | 88.9 | 78.9 | 68.1 | 78.9 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W32 | 512 | 68.9 | 87.8 | 74.9 | 63.0 | 77.4 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 70.8 | 88.5 | 76.8 | 66.3 | 77.4 |
| CenterAttention [4] | ICCV 2021 | HrHRNet-W32 | 512 | 68.6 | 87.6 | 74.1 | 62.0 | 78.0 |
| PoseTrans [15] | ECCV 2022 | HrHRNet-W32 | 512 | 68.4 | 87.1 | 74.8 | 62.7 | 77.1 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W32 | 512 | 67.1 | 86.2 | 73.0 | 61.5 | 76.1 |
| + Ours | - | HrHRNet-W32 | 512 | 69.9 (↑2.8) | 88.1 | 76.0 | 64.2 | 78.1 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 69.9 | 87.2 | 76.1 | 65.4 | 76.4 |
| + Ours | - | HrHRNet-W48 | 640 | 72.5 (↑2.6) | 89.3 | 79.1 | 68.3 | 79.0 |

Table 2. Comparisons with bottom-up methods on the COCO val2017 set (multi-scale testing).

| Method | Venue | Backbone | Input size | AP | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |
|---|---|---|---|---|---|---|---|---|
| HGG [16] | ECCV 2020 | Hourglass | 512 | 68.3 | 86.7 | 75.8 | - | - |
| Point-Set Anchors [38] | ECCV 2020 | HRNet-W48 | 640 | 69.8 | 88.8 | 76.3 | - | - |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 72.3 | 88.3 | 78.6 | 68.6 | 78.6 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W32 | 512 | 71.4 | 88.9 | 77.8 | 66.3 | 78.9 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 73.2 | 89.8 | 79.1 | 69.1 | 79.3 |
| PoseTrans [15] | ECCV 2022 | HrHRNet-W32 | 512 | 71.2 | 88.2 | 77.2 | 66.5 | 78.0 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W32 | 512 | 69.9 | 87.1 | 76.0 | 65.3 | 77.0 |
| + Ours | - | HrHRNet-W32 | 512 | 71.8 (↑1.9) | 88.9 | 78.1 | 67.3 | 78.4 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 72.1 | 88.4 | 78.2 | 67.8 | 78.3 |
| + Ours | - | HrHRNet-W48 | 640 | 73.7 (↑1.6) | 89.9 | 79.6 | 69.6 | 79.5 |

$$\begin{aligned} & \leq 4U^{2} A \int_{B_{U}} \int_{R_{sub}^{r}} \Big\| \frac{\varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t})}{(2\pi)^{2}} \Big\|_{2}^{2} \, d\mathbf{x} \, d\mathbf{t} \quad (17) \\ & = 4U^{2} A \int_{R_{sub}^{r}} \int_{B_{U}} \Big\| \frac{\varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t})}{(2\pi)^{2}} \Big\|_{2}^{2} \, d\mathbf{t} \, d\mathbf{x} \quad (18) \\ & = \int_{R_{sub}^{r}} \int_{B_{U}} \Big\| \frac{\gamma}{2U} \left( \varphi_{D(H_{p})}^{k}(\mathbf{t}) - \varphi_{D(H_{g})}^{k}(\mathbf{t}) \right) \Big\|_{2}^{2} \, d\mathbf{t} \, d\mathbf{x} \quad (19) \\ & = L_{k} \, \lambda(R_{sub}^{r}) \quad (20) \end{aligned}$$

where Eq. 12 holds since $\left\| [D(H_p)]_{R_{sub}^{e}} - [D(H_g)]_{R_{sub}^{e}} \right\|_2^2$ is relatively small compared to $\left\| [D(H_p)]_{R_{sub}^{r}} - [D(H_g)]_{R_{sub}^{r}} \right\|_2^2$, Eq. 13 holds because of Eq. 10, Eq. 15 holds based on the analysis in the supplementary, Eq. 16 holds due to the continuity of the L2 distance and the Cauchy-Schwarz inequality, Eq. 17 holds due to the fact that $\| e^{-i \langle \mathbf{t}, \mathbf{x} \rangle} \|_2^2 = 1$ and the Cauchy-Schwarz inequality, and Eq. 18 holds due to Fubini's theorem.

We can then move $\lambda(R_{sub}^r)$ on the right hand side of Eq. 20 to the left hand side to get Theorem 1.

As shown in Theorem 1, for the sub-region $R_{sub}^{r}$, when the sum of the pixelwise L2 distances between the predicted and GT heatmaps over this entire sub-region is relatively large compared to the sum over only its edges, $L_{k}$ will be an upper bound of the L2 loss w.r.t. this sub-region. Hence, via minimizing $L_{k}$, we can make the L2 losses w.r.t. all such sub-regions smaller. Note that such sub-regions can be easily found, since the edge of a sub-region typically contains far fewer pixels than the entire sub-region. Furthermore, for sub-regions containing missed or inaccurately predicted body joints at their centers, which are precisely the erroneous predictions that need to be corrected, the sum of the pixelwise L2 distances over the entire sub-region will be much larger than that over its edge alone. Therefore, our method can optimize the model to provide a more accurate localization result for the body joints in different sub-regions of the predicted heatmap at the same time, whereas existing bottom-up methods, which usually rely on the overall L2 loss, do not have this property. Thus, our method can achieve superior performance for bottom-up human pose estimation.

Note that during implementation, since $L_{k}$ itself as an integral is not tractable, inspired by [2], we define $\hat{L}_k$ as a

Table 3. Comparisons with bottom-up methods on the COCO test-dev2017 set (single-scale testing).

| Method | Venue | Backbone | Input size | AP | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |
|---|---|---|---|---|---|---|---|---|
| OpenPose [5] | CVPR 2017 | VGG-19 | - | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Hourglass [24] | ECCV 2016 | Hourglass | 512 | 56.6 | 81.8 | 61.8 | 49.8 | 67.0 |
| Associative Embedding [23] | NIPS 2017 | Hourglass | 512 | 56.6 | 81.8 | 61.8 | 49.8 | 67.0 |
| SPM [25] | ICCV 2019 | Hourglass | - | 66.9 | 88.5 | 72.9 | 62.6 | 73.1 |
| MDN [34] | CVPR 2020 | Hourglass | - | 62.9 | 85.1 | 69.4 | 58.8 | 71.4 |
| PersonLab [26] | ECCV 2018 | ResNet-152 | 1401 | 66.5 | 88.0 | 72.6 | 62.4 | 72.3 |
| PifPaf [17] | CVPR 2019 | ResNet-152 | - | 66.7 | - | - | 62.4 | 72.9 |
| PETR [30] | CVPR 2022 | SWin-L | 1333 | 70.5 | 91.5 | 78.7 | 65.2 | 78.0 |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 70.0 | 89.4 | 77.3 | 65.7 | 76.9 |
| PINet [37] | NIPS 2021 | HRNet-W32 | 512 | 66.7 | - | - | - | - |
| CIR&QEM [40] | AAAI 2022 | HRNet-W48 | 640 | 71.0 | 90.2 | 78.2 | 66.2 | 77.8 |
| CID [36] | CVPR 2022 | HRNet-W48 | 640 | 70.7 | 90.3 | 77.9 | 66.3 | 77.8 |
| LOGP-CAP [42] | CVPR 2022 | HRNet-W48 | 640 | 70.8 | 89.7 | 77.8 | 66.7 | 77.0 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 70.2 | 89.9 | 76.9 | 65.2 | 77.0 |
| CenterAttention [4] | ICCV 2021 | HrHRNet-W48 | 640 | 69.6 | 89.7 | 76.0 | 64.9 | 76.3 |
| PoseTrans [15] | ECCV 2022 | HrHRNet-W32 | 512 | 67.4 | 88.3 | 73.9 | 62.1 | 75.1 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W32 | 512 | 66.4 | 87.5 | 72.8 | 61.2 | 74.2 |
| + Ours | - | HrHRNet-W32 | 512 | 68.9 (↑2.5) | 89.2 | 75.7 | 63.7 | 76.1 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 68.4 | 88.2 | 75.1 | 64.4 | 74.2 |
| + Ours | - | HrHRNet-W48 | 640 | 71.1 (↑2.7) | 90.4 | 78.2 | 66.9 | 77.2 |

Table 4. Comparisons with bottom-up methods on the COCO test-dev2017 set (multi-scale testing).

| Method | Venue | Backbone | Input size | AP | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |
|---|---|---|---|---|---|---|---|---|
| Hourglass [24] | ECCV 2016 | Hourglass | 512 | 63.0 | 85.7 | 68.9 | 58.0 | 70.4 |
| Associative Embedding [23] | NIPS 2017 | Hourglass | 512 | 63.0 | 85.7 | 68.9 | 58.0 | 70.4 |
| HGG [16] | ECCV 2020 | Hourglass | 512 | 67.6 | 85.1 | 73.7 | 62.7 | 74.6 |
| SimplePose [18] | AAAI 2020 | IMHN | 512 | 68.1 | - | - | 66.8 | 70.5 |
| PersonLab [26] | ECCV 2018 | - | 1401 | 68.7 | 89.0 | 75.4 | 64.1 | 75.5 |
| PETR [30] | CVPR 2022 | SWin-L | 1333 | 71.2 | 91.4 | 79.6 | 66.9 | 78.0 |
| Point-Set Anchors [38] | ECCV 2020 | HRNet-W48 | 640 | 68.7 | 89.9 | 76.3 | 64.8 | 75.3 |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 71.0 | 89.2 | 78.0 | 67.1 | 76.9 |
| CIR&QEM [40] | AAAI 2022 | HRNet-W48 | 640 | 71.7 | 90.4 | 78.7 | 67.3 | 78.5 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 72.0 | 90.7 | 78.8 | 67.8 | 77.7 |
| CenterAttention [4] | ICCV 2021 | HrHRNet-W48 | 640 | 71.1 | 90.5 | 77.5 | 66.9 | 76.7 |
| PoseTrans [15] | ECCV 2022 | HrHRNet-W32 | 512 | 69.9 | 89.3 | 77.0 | 65.2 | 76.2 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W32 | 512 | 69.0 | 89.0 | 75.8 | 64.4 | 75.2 |
| + Ours | - | HrHRNet-W32 | 512 | 70.8 (↑1.8) | 90.1 | 77.8 | 66.0 | 77.3 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 70.5 | 89.3 | 77.2 | 66.6 | 75.8 |
| + Ours | - | HrHRNet-W48 | 640 | 72.3 (↑1.8) | 91.5 | 79.8 | 67.9 | 78.2 |

tractable alternative of $L_{k}$ as:

$$\hat{L}_{k} = \sum_{m=1}^{M} \left\| \frac{\gamma}{2U} \left( \varphi_{D(H_{p})}^{k}(\mathbf{t}_{m}) - \varphi_{D(H_{g})}^{k}(\mathbf{t}_{m}) \right) \right\|_{2}^{2} \tag{21}$$

where $\{\mathbf{t}_1, \dots, \mathbf{t}_M\}$ denotes a set of $M$ vectors randomly sampled from $B_U$.

The total loss $\hat{L}_{total}$ for all body joint types can then be written as:

$$\hat{L}_{total} = \sum_{k=1}^{K} \hat{L}_{k} \tag{22}$$
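Putting Eqs. 2, 3, 21, and 22 together, the sampled loss can be sketched end-to-end in NumPy. This is a sketch for clarity only: in practice the loss would be computed with a differentiable framework, the defaults $M=256$ and $U=64$ follow the experimental settings reported later, and all helper names are our own:

```python
import numpy as np

def char_fn(D, t):
    """phi_D(t) for a discrete 2-D distribution D (Eq. 1)."""
    h, w = D.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.sum(D * np.exp(1j * (t[0] * xs + t[1] * ys)))

def loss_k(H_p, H_g, U=64.0, M=256, rng=None):
    """Tractable loss for one joint type (Eq. 21): squared distances between
    the two characteristic functions at M vectors t sampled from B_U."""
    if rng is None:
        rng = np.random.default_rng(0)
    A = H_g.size
    gamma = U ** 2 * np.sqrt(A) / np.pi ** 2   # constant coefficient of Eq. 6
    D_p = np.maximum(H_p, 0.0)                 # Eq. 2: relu + L1-normalize
    D_p = D_p / D_p.sum()
    D_g = H_g / H_g.sum()
    loss = 0.0
    for _ in range(M):
        t = rng.uniform(-U, U, size=2)         # t ~ Uniform(B_U)
        diff = char_fn(D_p, t) - char_fn(D_g, t)
        loss += (gamma / (2 * U)) ** 2 * abs(diff) ** 2
    return loss

def total_loss(pred_heatmaps, gt_heatmaps):
    """Eq. 22: sum of per-joint-type losses over all K types."""
    return sum(loss_k(hp, hg) for hp, hg in zip(pred_heatmaps, gt_heatmaps))
```

The loss is zero when the two heatmaps induce the same distribution, and grows as probability mass in the predicted heatmap drifts away from the GT locations.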

3.4. Overall Training and Testing

Here we discuss the overall training and testing scheme of our method. Specifically, during training, we supervise the predicted heatmaps via the total loss in Eq. 22 instead of the commonly used overall L2 loss, and following [6, 22, 23], we conduct grouping via associative embedding. During testing, we follow the evaluation procedure of

previous works [6, 22] that conduct bottom-up human pose estimation. Note that $\hat{L}_k$ in Eq. 21 is easy to implement in experiments, and we provide more implementation details in the supplementary.

4. Experiments

To evaluate the effectiveness of our method for bottom-up human pose estimation, we conduct experiments on the COCO dataset [21] and the CrowdPose dataset [19]. Besides, we also test the effectiveness of our method on top-down methods in the supplementary. We conduct our experiments on RTX 3090 GPUs.

4.1. COCO Keypoint Detection

Dataset & evaluation metric. The COCO dataset [21] contains over 200k images, and in this dataset, each person instance is annotated with 17 body joints. This dataset consists of three subsets including COCO training set (57k

Table 5. Comparisons with bottom-up methods on the CrowdPose testing set.

| Method | Venue | Backbone | Input size | AP | $AP^{50}$ | $AP^{75}$ | $AP^{E}$ | $AP^{M}$ | $AP^{H}$ |
|---|---|---|---|---|---|---|---|---|---|
| *w/ single-scale testing* | | | | | | | | | |
| OpenPose [5] | CVPR 2017 | VGG-19 | - | - | - | - | 62.7 | 48.7 | 32.3 |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 65.9 | 86.4 | 70.6 | 73.3 | 66.5 | 57.9 |
| PETR [30] | CVPR 2022 | - | - | 72.0 | 90.9 | 78.8 | 78.0 | 72.5 | 65.4 |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 67.3 | 86.4 | 72.2 | 74.6 | 68.1 | 58.7 |
| PINet [37] | NIPS 2021 | HRNet-W32 | 512 | 68.9 | 88.7 | 74.7 | 75.4 | 69.6 | 61.5 |
| CID [36] | CVPR 2022 | HRNet-W48 | 640 | 72.3 | 90.8 | 77.9 | 78.7 | 73.0 | 64.8 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 71.6 | 88.5 | 77.6 | 78.9 | 72.4 | 63.0 |
| CenterAttention [4] | ICCV 2021 | HrHRNet-W48 | 640 | 67.6 | 87.7 | 72.7 | 73.9 | 68.2 | 60.3 |
| Ours | - | HrHRNet-W48 | 640 | 72.6 | 88.8 | 78.9 | 79.2 | 73.1 | 65.6 |
| *w/ multi-scale testing* | | | | | | | | | |
| HrHRNet [6] | CVPR 2020 | HrHRNet-W48 | 640 | 67.6 | 87.4 | 72.6 | 75.8 | 68.1 | 58.9 |
| DEKR [11] | CVPR 2021 | HRNet-W48 | 640 | 68.0 | 85.5 | 73.4 | 76.6 | 68.8 | 58.4 |
| PINet [37] | NIPS 2021 | HRNet-W32 | 512 | 69.8 | 89.1 | 75.6 | 76.4 | 70.5 | 62.2 |
| SWAHR [22] | CVPR 2021 | HrHRNet-W48 | 640 | 73.8 | 90.5 | 79.9 | 81.2 | 74.7 | 64.7 |
| CenterAttention [4] | ICCV 2021 | HrHRNet-W48 | 640 | 69.4 | 88.6 | 74.6 | 76.6 | 70.0 | 61.5 |
| Ours | - | HrHRNet-W48 | 640 | 74.1 | 90.7 | 80.2 | 81.3 | 74.9 | 65.1 |

images), COCO validation set (5k images), and COCO test-dev set (20k images). Following the train-test split of [22], we report results on the val2017 set and test-dev2017 set. Also following [22], we evaluate model performance using standard average precision (AP) calculated based on Object Keypoint Similarity (OKS) on this dataset, and report the following metrics: AP, $\mathrm{AP}^{50}$ , $\mathrm{AP}^{75}$ , $\mathrm{AP}^{\mathrm{M}}$ , and $\mathrm{AP}^{\mathrm{L}}$ .
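For reference, OKS compares a predicted pose to a ground-truth pose via per-joint squared distances $d_i^2$, the object scale $s$ (with $s^2$ the segment area), and per-joint falloff constants $k_i$; AP is then computed by matching predictions to ground truths at OKS thresholds from 0.50 to 0.95. The sketch below is a minimal version of this standard COCO formula; the $k_i$ values and the full matching procedure are defined by the COCO evaluation API.

```python
import numpy as np

def oks(pred, gt, visible, area, k):
    """Object Keypoint Similarity between two poses.

    pred, gt: (J, 2) joint coordinates; visible: (J,) boolean mask of
    labeled joints; area: object segment area (so the scale s satisfies
    s^2 = area); k: (J,) per-joint falloff constants from the COCO
    evaluation protocol.
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)         # per-joint squared distance
    e = d2 / (2.0 * area * k ** 2 + np.spacing(1))  # per-joint exponent
    return np.exp(-e)[visible].mean()             # average over labeled joints
```

A perfect prediction gives OKS = 1, and the similarity decays exponentially as joints drift away from the ground truth relative to the object scale.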

Implementation details. Following [4, 22], we use HrHRNet [6] as the baseline, and apply our proposed method to two backbones, HrHRNet-W32 and HrHRNet-W48. For these backbones, we follow their original training and testing configurations specified in [6]. Also following [6], we adopt three scales, 0.5, 1, and 2, in multi-scale testing. To calculate $\hat{L}_k$ following Eq. 21, we set the number of samples $M$ to 256 and the hyperparameter $U$ w.r.t. the finite range $B_U$ to 64 in our experiments.
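The three-scale testing setup can be illustrated with a common averaging pattern: run the model at each scale, resize the resulting heatmaps back to the base resolution, and average them. The exact aggregation follows the configuration of [6]; the function names `multi_scale_heatmaps` and `resize`, and the nearest-neighbor resizing, are our simplifying assumptions for this sketch.

```python
import numpy as np

def resize(a, shape):
    """Nearest-neighbor resize of a 2D array (illustration only)."""
    h, w = shape
    ys = (np.arange(h) * a.shape[0] / h).astype(int)
    xs = (np.arange(w) * a.shape[1] / w).astype(int)
    return a[ys][:, xs]

def multi_scale_heatmaps(image, model, scales=(0.5, 1.0, 2.0)):
    """Run the model at several image scales and average the heatmaps.

    `model(img)` is assumed to return per-joint heatmaps of shape
    (K, h, w) at the input resolution. Heatmaps from each scale are
    resized back to the base resolution before averaging.
    """
    H, W = image.shape[:2]
    acc = None
    for s in scales:
        img_s = resize(image, (int(H * s), int(W * s)))
        hm = model(img_s)                                # (K, h, w) at scale s
        hm = np.stack([resize(c, (H, W)) for c in hm])   # back to base size
        acc = hm if acc is None else acc + hm
    return acc / len(scales)
```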

Results. In Tab. 1 and Tab. 2, we report single-scale testing and multi-scale testing results on the COCO val2017 set. In Tab. 3 and Tab. 4, we report single-scale testing and multi-scale testing results on the COCO test-dev2017 set. We observe that after applying our method on both HrHRNet-W32 and HrHRNet-W48, a significant performance improvement is achieved, demonstrating the effectiveness of our method. Moreover, we also compare our method with other state-of-the-art bottom-up human pose estimation methods. Compared to these methods, our method consistently achieves the highest AP score, further demonstrating the effectiveness of our method.

4.2. CrowdPose

Dataset & evaluation metric. The CrowdPose dataset [19] contains about 20k images and 80k person instances, which are annotated with 14 body joints. This dataset consists of three subsets including the CrowdPose training set (10k images), the CrowdPose validation set (2k images), and the CrowdPose testing set (8k images). Following the train-test split of [6, 22], we report results on the testing set. Also following [6, 22], we evaluate model performance using standard AP calculated based on OKS on the CrowdPose dataset, and report the following metrics: AP, $\mathrm{AP}^{50}$, $\mathrm{AP}^{75}$, $\mathrm{AP}^{\mathrm{E}}$, $\mathrm{AP}^{\mathrm{M}}$, and $\mathrm{AP}^{\mathrm{H}}$.

Implementation details. On the CrowdPose dataset, we also use the HrHRNet [6] as the baseline, and we use HrHRNet-W48 as the backbone following [4,6,22]. We follow the original training and testing configurations specified in [6], and also follow [6] to adopt three scales 0.5, 1, and 2 in multi-scale testing. Besides, same as the experiments on the COCO dataset, we also set the number of samples $M$ to 256 and the hyperparameter $U$ w.r.t. the finite range $B_U$ to 64 on the CrowdPose dataset.

Results. In Tab. 5, we report the single-scale testing and multi-scale testing results on the CrowdPose testing set. As shown, our method consistently achieves the highest AP score, demonstrating the effectiveness of our method.

4.3. Ablation Studies

We conduct ablation studies on the COCO validation set via applying our proposed method on HrHRNet-W32 [6] with single-scale testing.

Impact of the number of samples $M$. To calculate $\hat{L}_k$ following Eq. 21, we need to set the number of samples $M$, which we set to 256 in our experiments. We evaluate other choices of $M$ in Tab. 6. As shown, all variants outperform the baseline method, and once the number of samples $M$ exceeds 256, the model performance stabilizes. Therefore, we set the number of samples $M$ to 256 in our experiments.

Impact of the finite range $B_U$ with different $U$ . We evaluate different choices of $U$ in Tab. 7. As shown, all variants outperform the baseline method, and after the hyperparameter $U$ becomes larger than 64, the model performance does

[Figure 2: four example panels (a)-(d) for the body joint types right shoulders, right eyes, right elbows, and left shoulders; for each panel, the baseline (HrHRNet-W32) result is shown above our result.]

Figure 2. Qualitative results of our method and the baseline HrHRNet-W32 model [6]. As shown, the baseline method misses body joints (in (a) and (b)) or misidentifies body joints (in (c) and (d)) in some sub-regions of the predicted heatmap (see the sub-regions framed with dashed lines). Meanwhile, our method provides a more accurate localization result for the body joints of different people in different sub-regions of the predicted heatmap at the same time. More qualitative results are in the supplementary. (Best viewed in color.)

Table 6. Evaluation on the number of samples $M$.

| Method | AP | $\mathrm{AP}^{50}$ | $\mathrm{AP}^{75}$ | $\mathrm{AP}^{\mathrm{M}}$ | $\mathrm{AP}^{\mathrm{L}}$ |
|---|---|---|---|---|---|
| Baseline (HrHRNet-W32) | 67.1 | 86.2 | 73.0 | 61.5 | 76.1 |
| 4 samples | 67.9 | 86.9 | 73.8 | 62.4 | 76.9 |
| 16 samples | 68.9 | 87.5 | 74.8 | 63.5 | 77.4 |
| 64 samples | 69.6 | 87.9 | 75.6 | 63.9 | 77.8 |
| 256 samples | 69.9 | 88.1 | 76.0 | 64.2 | 78.1 |
| 1024 samples | 69.8 | 88.2 | 76.0 | 64.3 | 78.0 |

not improve further. Thus, we set the hyperparameter $U$ to 64 in our experiments.

Table 7. Evaluation on the hyperparameter $U$ w.r.t. the finite range ${B}_{U}$ .

| Method | AP | $\mathrm{AP}^{50}$ | $\mathrm{AP}^{75}$ | $\mathrm{AP}^{\mathrm{M}}$ | $\mathrm{AP}^{\mathrm{L}}$ |
|---|---|---|---|---|---|
| Baseline (HrHRNet-W32) | 67.1 | 86.2 | 73.0 | 61.5 | 76.1 |
| $U = 8$ | 67.7 | 86.7 | 73.5 | 62.2 | 76.5 |
| $U = 16$ | 68.6 | 87.3 | 74.2 | 62.9 | 76.9 |
| $U = 32$ | 69.4 | 87.8 | 75.4 | 63.7 | 77.6 |
| $U = 64$ | 69.9 | 88.1 | 76.0 | 64.2 | 78.1 |
| $U = 128$ | 69.8 | 88.0 | 75.8 | 64.0 | 78.0 |

Training time. On the COCO dataset, we measure the training time of our method, which trains the backbone model (HrHRNet-W32 [6]) with the loss function in Eq. 22, and compare it with the training time of the baseline, which trains the same network with the overall L2 loss. As shown in Tab. 8, though our method achieves much better performance, it adds only a marginal increase in training time. Note that, as we follow the same evaluation procedure as previous works [6, 22], the testing time with and without our proposed method is the same.

Qualitative results. Some qualitative results are shown in Fig. 2. As shown, the baseline method which uses the overall L2 loss to optimize the heatmap prediction can miss or

Table 8. Comparison of the training time.

| Method | Training time per epoch | Performance (AP) |
|---|---|---|
| Baseline (HrHRNet-W32) | 1.11 h | 67.1 |
| Baseline + Ours | 1.19 h | 69.9 |

misidentify body joints in some sub-regions of the predicted heatmap (see the sub-regions framed with dashed lines). In contrast, our method simultaneously locates the body joints of different people across different sub-regions of the predicted heatmap more accurately, demonstrating the effectiveness of our method.

5. Conclusion

In this paper, we have proposed a novel bottom-up human pose estimation method that optimizes the heatmap prediction via minimizing the distance between two characteristic functions respectively constructed from the predicted and ground-truth heatmaps. Our theoretical analysis shows that the distance between the two characteristic functions is the upper bound of the L2 losses w.r.t. sub-regions of the predicted heatmap. Thus, via minimizing the distance between the two characteristic functions, our method simultaneously locates the body joints in different sub-regions of the predicted heatmap more accurately. Our method achieves superior performance on the COCO dataset and the CrowdPose dataset. Besides, our method could potentially also be applied to other tasks such as multi-object 6D pose estimation [1], facial landmark extraction [3], and fingerprint minutiae detection [10]. We leave this as our future work.

Acknowledgement. This work is supported by MOE AcRF Tier 2 (Proposal ID: T2EP20222-0035), National Research Foundation Singapore under its AI Singapore Programme (AISG-100E-2020-065), and SUTD SKI Project (SKI 2021_02_06).

References

[1] Arash Amini, Arul Selvam Periyasamy, and Sven Behnke. Yolopose: Transformer-based multi-object 6d pose estimation using keypoint regression. In Intelligent Autonomous Systems 17: Proceedings of the 17th International Conference IAS-17, pages 392–406. Springer, 2023. 8
[2] Abdul Fatir Ansari, Jonathan Scarlett, and Harold Soh. A characteristic function approach to deep implicit generative modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7478-7487, 2020. 3, 4, 5
[3] Matteo Bodini. A review of facial landmark extraction in 2d images and videos using deep learning. Big Data and Cognitive Computing, 3(1):14, 2019. 8
[4] Guillem Brasó, Nikita Kister, and Laura Leal-Taixé. The center of attention: Center-keypoint grouping via attention for multi-person pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11853-11863, 2021. 5, 6, 7
[5] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7291–7299, 2017. 1, 2, 3, 5, 6, 7
[6] Bowen Cheng, Bin Xiao, Jingdong Wang, Honghui Shi, Thomas S Huang, and Lei Zhang. Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5386-5395, 2020. 1, 2, 5, 6, 7, 8
[7] Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 183-192, 2020. 1
[8] Kacper P Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. Advances in Neural Information Processing Systems, 28, 2015. 3
[9] TW Epps and Kenneth J Singleton. An omnibus test for the two-sample problem using the empirical characteristic function. Journal of Statistical Computation and Simulation, 26(3-4):177-203, 1986. 3
[10] Yulin Feng and Ajay Kumar. Detecting locally, patching globally: An end-to-end framework for high speed and accurate detection of fingerprint minutiae. IEEE Transactions on Information Forensics and Security, 2023. 8
[11] Zigang Geng, Ke Sun, Bin Xiao, Zhaoxiang Zhang, and Jingdong Wang. Bottom-up human pose estimation via disentangled keypoint regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14676-14686, 2021. 3, 5, 6, 7
[12] Kerui Gu, Linlin Yang, and Angela Yao. Removing the bias of integral pose regression. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11067-11076, 2021. 1

[13] CE Heathcote. A test of goodness of fit for symmetric random variables1. Australian Journal of Statistics, 14(2):172-181, 1972. 3
[14] Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In European conference on computer vision, pages 34-50. Springer, 2016. 3
[15] Wentao Jiang, Sheng Jin, Wentao Liu, Chen Qian, Ping Luo, and Si Liu. Posetrans: A simple yet effective pose transformation augmentation for human pose estimation. arXiv preprint arXiv:2208.07755, 2022. 5, 6
[16] Sheng Jin, Wentao Liu, Enze Xie, Wenhai Wang, Chen Qian, Wanli Ouyang, and Ping Luo. Differentiable hierarchical graph grouping for multi-person pose estimation. In European Conference on Computer Vision, pages 718-734. Springer, 2020. 1, 2, 3, 5, 6
[17] Sven Kreiss, Lorenzo Bertoni, and Alexandre Alahi. Pifpaf: Composite fields for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11977-11986, 2019. 1, 2, 3, 5, 6
[18] Jia Li, Wen Su, and Zengfu Wang. Simple pose: Rethinking and improving a bottom-up approach for multi-person pose estimation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 11354-11361, 2020. 6
[19] Jiefeng Li, Can Wang, Hao Zhu, Yihuan Mao, Hao-Shu Fang, and Cewu Lu. Crowdpose: Efficient crowded scenes pose estimation and a new benchmark. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10863-10872, 2019. 2, 6, 7
[20] Shengxi Li, Zeyang Yu, Min Xiang, and Danilo Mandic. Reciprocal adversarial learning via characteristic functions. Advances in Neural Information Processing Systems, 33:217-228, 2020. 3
[21] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 2, 6
[22] Zhengxiong Luo, Zhicheng Wang, Yan Huang, Liang Wang, Tieniu Tan, and Erjin Zhou. Rethinking the heatmap regression for bottom-up human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13264-13273, 2021. 1, 2, 5, 6, 7, 8
[23] Alejandro Newell, Zhiao Huang, and Jia Deng. Associative embedding: End-to-end learning for joint detection and grouping. Advances in neural information processing systems, 30, 2017. 1, 2, 3, 6
[24] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European conference on computer vision, pages 483-499. Springer, 2016. 1, 2, 6
[25] Xuecheng Nie, Jiashi Feng, Jianfeng Zhang, and Shuicheng Yan. Single-stage multi-person pose machines. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6951-6960, 2019. 1, 2, 3, 6

[26] George Papandreou, Tyler Zhu, Liang-Chieh Chen, Spyros Gidaris, Jonathan Tompson, and Kevin Murphy. Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. In Proceedings of the European conference on computer vision (ECCV), pages 269-286, 2018. 1, 2, 3, 5, 6
[27] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter V Gehler, and Bernt Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4929-4937, 2016. 3
[28] Xuelin Qian, Yanwei Fu, Tao Xiang, Wenxuan Wang, Jie Qiu, Yang Wu, Yu-Gang Jiang, and Xiangyang Xue. Pose-normalized image generation for person re-identification. In Proceedings of the European conference on computer vision (ECCV), pages 650–667, 2018. 1
[29] Haoxuan Qu, Li Xu, Yujun Cai, Lin Geng Foo, and Jun Liu. Heatmap distribution matching for human pose estimation. In Advances in Neural Information Processing Systems. 2
[30] Dahu Shi, Xing Wei, Liangqi Li, Ye Ren, and Wenming Tan. End-to-end multi-person pose estimation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11069-11078, 2022. 5, 6, 7
[31] Weibo Shu, Jia Wan, Kay Chen Tan, Sam Kwong, and Antoni B Chan. Crowd counting in the frequency domain. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19618-19627, 2022. 3
[32] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5693-5703, 2019. 1, 2
[33] Jonathan J. Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. Advances in neural information processing systems, 27, 2014. 1, 2
[34] Ali Varamesh and Tinne Tuytelaars. Mixture dense regression for object detection and human pose estimation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13083-13092. IEEE, 2020. 6
[35] Bo Wan, Desen Zhou, Yongfei Liu, Rongjie Li, and Xuming He. Pose-aware multi-level feature network for human object interaction detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9469-9478, 2019. 1
[36] Dongkai Wang and Shiliang Zhang. Contextual instance decoupling for robust multi-person pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11060-11068, 2022. 5, 6, 7
[37] Dongkai Wang, Shiliang Zhang, and Gang Hua. Robust pose estimation in crowded scenes with direct pose-level inference. Advances in Neural Information Processing Systems, 34:6278-6289, 2021. 5, 6, 7
[38] Fangyun Wei, Xiao Sun, Hongyang Li, Jingdong Wang, and Stephen Lin. Point-set anchors for object detection, instance segmentation and pose estimation. In European Conference on Computer Vision, pages 527-544. Springer, 2020. 5, 6
[39] Bin Xiao, Haiping Wu, and Yichen Wei. Simple baselines for human pose estimation and tracking. In Proceedings of the European conference on computer vision (ECCV), pages 466-481, 2018. 1, 2
[40] Yabo Xiao, Dongdong Yu, Xiao Juan Wang, Lei Jin, Guoli Wang, and Qian Zhang. Learning quality-aware representation for multi-person pose regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2822-2830, 2022. 5, 6
[41] Jiangtao Xie, Fei Long, Jiaming Lv, Qilong Wang, and Peihua Li. Joint distribution matters: Deep brownian distance covariance for few-shot classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7972-7981, 2022. 3
[42] Nan Xue, Tianfu Wu, Gui-Song Xia, and Liangpei Zhang. Learning local-global contextual adaptation for multi-person pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13065-13074, 2022. 5, 6
[43] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-second AAAI conference on artificial intelligence, 2018. 1
[44] Yuhui Yuan, Rao Fu, Lang Huang, Weihong Lin, Chao Zhang, Xilin Chen, and Jingdong Wang. Hrformer: High-resolution transformer for dense prediction. 2021. 1, 2
[45] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. arXiv preprint arXiv:1904.07850, 2019. 3