
Method

We use the network of Newell et al. based on the implementation provided by Yang . We provide a diagram of the network in Figure 13{reference-type="ref" reference="fig:network"} and direct the user to the paper by Newell et al. for full details of the network components.

We use the stacked-hourglass network of Newell et al. ; in our experiments a stack of two hourglasses is used. "Conv" stands for convolution and "FC" for fully connected. For full details of the network as implemented, we direct the reader to the paper of Newell et al. .

3D joint locations of the skeletons are defined in camera space, $J_{3Dcam}$, and 2D joint locations, $J_{2Dfull}$, are their projections into the synthetic image. Only images where all joints in $J_{2Dfull}$ are within the image bounds were included in the dataset. The images are shaped to fit the network inputs by following the steps outlined in Algorithm [al:imCrop]{reference-type="ref" reference="al:imCrop"}, producing images that are 256x256 pixels in size.
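As a concrete sketch of how $J_{2Dfull}$ could be obtained from $J_{3Dcam}$, the following assumes a standard pinhole projection; the intrinsic values (`fx`, `fy`, `cx`, `cy`) are placeholders, not the paper's calibration:

```python
import numpy as np

# Hypothetical intrinsics: the paper records with Kinect v2 cameras but
# does not list calibration values in this section.
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0

def project_joints(J_3Dcam):
    """Project camera-space 3D joints (N, 3) to 2D pixel coordinates (N, 2)."""
    x, y, z = J_3Dcam[:, 0], J_3Dcam[:, 1], J_3Dcam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

def all_joints_in_bounds(J_2Dfull, width, height):
    """Filter criterion from the text: keep a frame only if every
    projected joint falls inside the image."""
    return bool(np.all((J_2Dfull >= 0) &
                       (J_2Dfull < np.array([width, height]))))
```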

The bounding box of the transformed 256x256 image, and the bounding box of the original mask are used to calculate the scale and translation required to transform the dog in the 256x256 image back to its position in the original full-size RGBD image. $J_{2Dfull}$ are also transformed using Algorithm [al:imCrop]{reference-type="ref" reference="al:imCrop"}, producing $J_{2D256}$. Finally, the z-component in $J_{3Dcam}$ is added as the z-component in $J_{2D256}$, giving $J_{3D256}$. The x- and y- components of $J_{3D256}$ lie in the range [0,255]. We transform the z-component to lie in the same range by using Algorithm [al:normalisedDepth]{reference-type="ref" reference="al:normalisedDepth"}. In Algorithm [al:normalisedDepth]{reference-type="ref" reference="al:normalisedDepth"}, we make two assumptions:

  1. The root joint of the skeleton lies within 8 metres of the camera, the maximum range detected by a Kinect v2.

  2. Following Sun et al. , the remaining joints are defined as offsets from the root joint, normalised to lie within $\pm$ two metres. This bound is chosen so that the algorithm scales to large animals such as horses.

::: algorithm
1. Calculate dog bounding box from binary mask
2. Apply mask to RGBD image
3. Crop the image to the bounding box
4. Make the image square by adding rows/columns in a symmetric fashion
5. Scale the image to be 256x256 pixels
6. Add padding to the image, bringing the size to 293x293 pixels, and rescale the image to 256x256 pixels
:::
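A minimal NumPy sketch of Algorithm [al:imCrop], using a nearest-neighbour resize as a stand-in for whatever interpolation the original implementation uses:

```python
import numpy as np

def bbox_from_mask(mask):
    """Axis-aligned bounding box (x0, y0, x1, y1) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def resize_nearest(img, out_h, out_w):
    """Minimal nearest-neighbour resize (stand-in for a library call)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def im_crop(rgbd, mask):
    """Sketch of Algorithm [al:imCrop]: mask, crop, pad square,
    resize to 256, pad to 293, and rescale back to 256."""
    x0, y0, x1, y1 = bbox_from_mask(mask)
    masked = rgbd * mask[..., None]          # apply mask to RGBD image
    crop = masked[y0:y1, x0:x1]
    h, w = crop.shape[:2]
    d = max(h, w)                            # make square, padding symmetrically
    pad_y, pad_x = d - h, d - w
    crop = np.pad(crop, ((pad_y // 2, pad_y - pad_y // 2),
                         (pad_x // 2, pad_x - pad_x // 2), (0, 0)))
    img = resize_nearest(crop, 256, 256)
    pad = 293 - 256                          # pad to 293x293, then rescale
    img = np.pad(img, ((pad // 2, pad - pad // 2),
                       (pad // 2, pad - pad // 2), (0, 0)))
    return resize_nearest(img, 256, 256)
```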

::: algorithm $rootJoint = J_{3D256}[0]$ :::
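The body of Algorithm [al:normalisedDepth] is abbreviated above; the sketch below applies the two stated assumptions (an 8-metre root-depth bound and $\pm$2-metre joint offsets) to map the z-components into [0, 255]. The exact normalisation used in the paper may differ.

```python
import numpy as np

MAX_ROOT_DEPTH = 8.0   # metres, Kinect v2 maximum range (assumption 1)
MAX_OFFSET = 2.0       # metres, offset bound from the root (assumption 2)

def normalise_depth(J_3D256):
    """Map z-components of J_3D256 (N, 3) into [0, 255], following the
    two assumptions stated in the text. The root joint is row 0."""
    J = J_3D256.astype(float).copy()
    root_z = J[0, 2]
    # Root depth: [0, 8] m -> [0, 255]
    J[0, 2] = root_z / MAX_ROOT_DEPTH * 255.0
    # Remaining joints: offsets from the root in [-2, 2] m -> [0, 255]
    offsets = J[1:, 2] - root_z
    J[1:, 2] = (offsets + MAX_OFFSET) / (2 * MAX_OFFSET) * 255.0
    return J
```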

We use a Hierarchical Gaussian Process Latent Variable Model (H-GPLVM) to represent high-dimensional skeleton motions in a lower-dimensional latent space. Figure 14{reference-type="ref" reference="fig:hgpvlm"} shows how the structure of the H-GPLVM relates to the structure of the dog skeleton: the latent variable representing the full body controls the tail, legs, spine, and head variables, while the four legs are further decomposed into individual limbs. Equation [e:jointdistr]{reference-type="ref" reference="e:jointdistr"} shows the corresponding joint distribution.

The structure of our H-GPLVM. Each node $X_i$ produces joint rotations (and translation, if applicable) $Y_i$ for the bones with the corresponding colour.
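The tree in Figure 14 could be encoded as a simple parent-to-children map; the node names below are illustrative, not taken from the paper:

```python
# Hypothetical encoding of the H-GPLVM hierarchy: each latent node maps to
# its children; leaves correspond to the body parts whose data blocks Y_i
# the node generates.
HIERARCHY = {
    "X9_full_body": ["X1_tail", "X8_legs", "X6_spine", "X7_head"],
    "X8_legs": ["X2_back_left_leg", "X3_front_left_leg",
                "X4_back_right_leg", "X5_front_right_leg"],
}

def leaves(node, tree=HIERARCHY):
    """Collect the leaf (body-part) nodes below `node`."""
    if node not in tree:
        return [node]
    out = []
    for child in tree[node]:
        out.extend(leaves(child, tree))
    return out
```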

$$\begin{align*}
p(Y_1, Y_2,& Y_3, Y_4, Y_5, Y_6, Y_7) =\\
&\int p(Y_1 \vert X_1)\ldots \\
&\times \int p(Y_2 \vert X_2)\ldots \\
&\times \int p(Y_3 \vert X_3)\ldots \\
&\times \int p(Y_4 \vert X_4)\ldots \\
&\times \int p(Y_5 \vert X_5)\ldots \\
&\times \int p(Y_6 \vert X_6)\ldots \\
&\times \int p(Y_7 \vert X_7)\ldots \\
&\times \int p(X_2, X_3, X_4, X_5 \vert X_8)\ldots \\
&\times \int p(X_1, X_8, X_6, X_7 \vert X_9) \, d X_9 \ldots d X_1, \addtocounter{equation}{1}\tag{\theequation} \label{e:jointdistr}
\end{align*}$$
where $Y_1$ to $Y_7$ are the rotations (and translations, if applicable) of the joints in the tail, back left leg, front left leg, back right leg, front right leg, spine and head respectively, $X_1$ to $X_7$ are the nodes in the model for each respective body part, $X_8$ is the node for all four legs, and $X_9$ is the root node.

Let $Y$ be the motion data matrix of $f$ frames and dimension $d$, $Y \in \mathbb{R}^{f\times d}$, containing the data of $Y_1$ to $Y_7$. $K_{x_i}$ is the radial basis function kernel matrix that depends on the $q$-dimensional latent variables $X_i$ corresponding to $Y_i$. [$s_i$, $e_i$] define the start and end indices of the columns in $Y$ that contain the data for $Y_i$. $N$ denotes the normal distribution. Then, $$\begin{equation} p(Y_i \vert X_i) = \prod_{j=s_i}^{e_i} N({Y_i}_{[:,j]} \vert 0, K_{x_i}), \end{equation}$$ where ${Y_i}_{[:,j]}$ denotes the $j$-th column of $Y_i$.
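The per-column Gaussian likelihood above can be sketched directly; the RBF kernel hyperparameters here (`gamma`, the jitter `noise`) are assumptions, not values from the paper:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0, noise=1e-6):
    """RBF (squared-exponential) kernel matrix on latent points X (f, q),
    with a small jitter term for numerical stability."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) + noise * np.eye(len(X))

def log_p_Yi_given_Xi(Y_i, X_i, gamma=1.0):
    """log p(Y_i | X_i): each column of Y_i is an independent zero-mean
    Gaussian with covariance K_{x_i} (the per-column product in the text)."""
    K = rbf_kernel(X_i, gamma)
    f, d = Y_i.shape
    _, logdet = np.linalg.slogdet(K)
    Kinv = np.linalg.inv(K)
    quad = np.einsum('fj,fg,gj->', Y_i, Kinv, Y_i)  # sum_j y_j^T K^{-1} y_j
    return -0.5 * (d * (f * np.log(2 * np.pi) + logdet) + quad)
```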

When fitting the H-GPLVM to the network-predicted joints, each joint has an associated weight to guide fitting. This weight is the elementwise product of two sets of weights, $W_1$ and $W_2$. $W_1$ is user-defined and inspired by the weights used by the Vicon software. Specifically, these are [5,5,5,0.8,0.5,0.8,1,1,1,0.8,0.5,0.8,1,1,1, 0.8,0.5,0.5,0.8,1,1,0.1,0,0.1,0,0.8,1,1,1,1,0.8,1,1,1,1,1,1,1,1, 1,1,1,1]. This gives the root and spine the highest weight (5), and the end of each limb a higher weight (1) than the base of the limb (0.8). Each joint in the tail is given an equal weight (1). As the ears were not included in the model, a weight of 0 was given to the ear tips, and 0.1 to the base of the ears, in order to slightly influence head rotation.

Prior to the fitting stage, the shape and size of the dog skeleton have either been provided by the user or generated by the PCA shape model. The bone lengths $L$ of this skeleton can be calculated. For the current frame, we calculate the lengths of the bones in the skeleton as predicted by the network, $L_{N}$. The deviation from $L$ is then calculated as $abs(L - L_{N})/L$. $W_2$ is calculated as the inverse of this deviation, capped to lie within the range [0,1].
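Taking the text literally ($W_2$ as the inverse of the relative deviation, clipped to $[0,1]$), the weight computation can be sketched as follows; mapping per-bone weights onto the per-joint entries of $W_1$ is omitted here:

```python
import numpy as np

def bone_weight(L, L_N, eps=1e-8):
    """W_2 per bone: inverse of the relative deviation |L - L_N| / L,
    clipped to [0, 1]. A bone whose predicted length matches the model
    skeleton (deviation near 0) receives full weight 1."""
    deviation = np.abs(L - L_N) / L
    return np.clip(1.0 / (deviation + eps), 0.0, 1.0)

def fitting_weights(W1, W2):
    """Per-joint weight used during H-GPLVM fitting: elementwise product
    of the user-defined W_1 and the length-consistency W_2."""
    return W1 * W2
```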