Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: ArrowInvalid
Message: JSON parse error: Missing a closing quotation mark in string. in row 0
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 74386)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
| text (string) | meta (dict) |
|---|---|
\section{Introduction}
6D pose estimation predicts a rigid transformation (i.e., 3D rotation and translation) from the 3D coordinate system of the object to the 3D coordinate system of the camera. Accurate 6D pose enables a robot to interact with target objects in the environment effectively~\cite{choi2012voting,du2021vision}.
Although recent years have witnessed rapid progress in 6D object pose estimation, the task remains challenging, with many open problems, due to heavy occlusions, cluttered backgrounds, and varying illumination in the environment.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{showso3.pdf}
\caption{\PHR{
Visualization of the SO(3) property. The red dots represent keypoints, and the yellow dot is one of them. The black dot is an appearance point on the object. When the object rotates, the semantic label (black dot) is invariant, while the keypoint offset direction (the orange arrow from the black dot to the yellow dot) rotates with the object.}
}
\label{fig:so3}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{duckvis.pdf}
\caption{Comparison of our SO(3)-Pose with FFB6D \cite{he2021ffb6d} and SS-Conv \cite{lin2021sparse} for pose estimation on the LineMOD dataset. Our SO(3)-Pose predicts a more accurate 6D pose for the object (see the Duck), fitting the duck in the image better than the state-of-the-art methods.
Please note that the object vertices in the object coordinate system are transformed to the camera coordinate system by the poses predicted by the individual methods, and then projected onto the image with the camera intrinsic matrix. }
\label{duckvis}
\end{figure*}
Recent methods mainly depend on learning-based techniques (e.g., CNNs)~\cite{peng2020pvnet} for 6D pose estimation. That is, at the training stage, cutting-edge models learn from 3D objects and images that contain the objects in known 6D poses; at the test stage, given a list of object instances and an image with the objects visible in it, the trained models infer the 6D poses of the listed object instances. Initially, the inputs of these learning-based models were only RGB images. In cases of 1) similar, occluded, or texture-free objects, 2) poor lighting conditions, and 3) low-contrast scenes, such models struggle to learn distinctive representations from RGB images due to the lack of geometry information.
Recently popular 3D sensors, such as the Microsoft Kinect, Velodyne LiDAR, Intel RealSense, and the LiDAR scanner of the Apple iPad Pro, capture the real world as RGB-D images. The extra depth (D) information promises to alleviate these difficulties.
Given an RGB-D image, how to fully benefit from the two modalities for better 6D pose estimation is still an open problem. The common way of handling the cross-modal data is to extract appearance features and geometry features separately with two-stream networks, e.g., DenseFusion~\cite{ChenWang2019DenseFusion6O} and PVN3D~\cite{he2020pvn3d}. The appearance and geometry features from the two streams are fused and assigned to each pixel to realize the pose estimation.
However, since the two streams seldom interact with each other to obtain mutual gains, their performance suffers from degradation in cases of, e.g., objects with similar appearance or with reflective surfaces. FFB6D~\cite{he2021ffb6d} pioneers communication between the two streams by constructing bidirectional fusion modules.
Thus, the two streams are encouraged to share local-and-global complementary information for learning appearance and geometry representations.
Although FFB6D achieves state-of-the-art performance, the extracted features are still simply fused to learn the semantic segmentation and keypoint offsets, without exploring geometry domain knowledge.
Furthermore, since the full-flow bidirectional fusion operation is applied at every encoding and decoding layer of the two networks, the excessive interaction of appearance and geometry features leads to substantial time consumption, which hinders real-time applications.
By contrast, our work delves into geometric properties from the perspective of rigid transformations and learns SO(3)-equivariance and SO(3)-invariance instead of conventional geometric features, thereby estimating the 6D object pose effectively. Herein, an SO(3)-equivariant (resp. SO(3)-invariant) feature mapping is one whose output rotates together with (resp. is unchanged by) a rotation applied to the input.
In this work, we propose SO(3)-Pose, a new representation learning network that explores SO(3)-equivariant and SO(3)-invariant features from point clouds for \PHR{instance-level object pose estimation, which requires the object mesh.}
SO(3)-Pose adopts a two-stage strategy: 1) jointly segment the target objects from the RGB image with the help of SO(3)-invariant features, and regress the keypoints of the objects from the depth image with the help of SO(3)-equivariant features; 2) given the keypoints, solve a least-squares fitting problem over the 3D-3D correspondences to produce the pose parameters.
From our observation, the semantic label is SO(3)-invariant and the keypoint offset is SO(3)-equivariant (see Fig. \ref{fig:so3}).
To this end, we design an SO(3)-equivariant encoder to extract SO(3)-equivariant features from the point cloud, and develop an equivariant-to-invariant layer (E2Ilayer) to convert the SO(3)-equivariant features into SO(3)-invariant features. The SO(3)-equivariant features and the RGB features are aggregated to localize keypoints, while the SO(3)-invariant features and the RGB features are aggregated to segment object instances.
Our method achieves the state-of-the-art performance (see Fig. \ref{duckvis}). In summary, our contributions are as follows:
\begin{itemize}
\item[1)] We propose a novel 6D object pose estimation network, which introduces SO(3)-equivariance for representation learning. To the best of our knowledge, this is the first work to introduce SO(3)-equivariance to pose estimation.
\item[2)] We design a new module termed E2Ilayer, which effectively converts the SO(3)-equivariant features into SO(3)-invariant features; we also design a novel loss, dubbed SO3Loss, to guide the SO(3)-equivariance learning.
\item[3)] We show that SO(3)-equivariance benefits both semantic segmentation and keypoint detection by developing individual feature fusion modules.
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{overview.pdf}
\caption{Overview of our SO(3)-Pose. SO(3)-equivariance is the geometric property that a feature rotates by the same amount and in the same direction as the object.
The SO(3)-invariant feature $F_{inv}$ is obtained from the SO(3)-equivariant feature $F_{equi}$ through the designed E2Ilayer. $F_{inv}$ and $F_{rgb}$ are fused and fed into a semantic segmentation module. $F_{equi}$ and $F_{rgb}$ are fused and fed into a 3D keypoint detection module to obtain per-object keypoints. Finally, the 6D pose parameters are fitted with the 3D-3D correspondence of keypoints. }
\label{fig:overview}
\end{figure*}
\section{Related Work}
\subsection{Traditional approaches}
In the traditional setting, methods focus on holistic and local object shape representation learning. These methods can be roughly divided into two groups, template-based and descriptor-based, distinguished by how they utilize feature embedding and clustering for pose estimation.
The first group mainly aims at computing a holistic shape description for each target model~\cite{du2019vision,hinterstoisser2012model,hodavn2015detection}. The key technique of this type of method is template matching. The templates are generated by projecting a 3D model onto different image planes from various viewpoints, and each template is associated with a pose. In the inference phase, the final object pose is recovered from the correlation coefficient between the query window and the template~\cite{zhang2017texture}.
As an early attempt at this task, Hinterstoisser et al.~\cite{hinterstoisser2012model} proposed a classical framework that integrates image gradients and surface normals for robust feature embedding. This scheme achieves a distinctive holistic shape representation and strong detection performance.
Rios-Cabrera et al.~\cite{rios2013discriminatively} devised a real-time scalable approach based on LINE2D/LINEMOD~\cite{hinterstoisser2011gradient}. They also designed a novel strategy to distinguish templates by clustering.
This kind of method works well with texture-less objects, but detection performance degrades severely under heavy occlusion in real scenes.
The second group of methods performs either 2D-3D or 3D-3D matching to establish correspondences in feature space, and the final pose parameters can be recovered by solving the PnP problem~\cite{lepetit2009epnp}.
The core insight of this class of methods is to extract robust feature points both on the image plane and on the 3D geometry surface~\cite{jiang2021review,ma2021image,hana2018comprehensive,li2021tutorial}, and to employ feature descriptors at each keypoint to obtain a series of matched points based on feature similarity.
One of the most representative approaches was introduced by Mur-Artal et al.~\cite{mur2015orb}, who proposed a novel 2D descriptor called ORB. ORB features are first extracted from the given image, and the pose parameters of keyframes are computed from the constructed correspondences between 2D pixels and 3D points.
Correspondingly, when depth data is available, 3D descriptors~\cite{salti2014shot,huang2021comprehensive,drost2010model,hinterstoisser2016going,vidal2018method,zhou2020bold3d} can be built on the model surfaces, and the correspondence matrix is filled with the matched 3D points. The pose estimation task then turns into a rigid registration problem. Once the correspondences are established, the final pose can be solved with the SVD algorithm.
In general, descriptor-based approaches have advantages in terms of computational complexity and robustness to partial occlusion, since only the local structural features of the model are highlighted and pushed into the downstream calculation stages. However, due to their locality, the proposed 3D descriptors are sensitive to variations of the environment, e.g., illumination.
\subsection{Learning-based approaches}
Recently, with the prevalence of deep learning techniques, learning-based approaches have been deployed in many vision fields and have shown remarkable progress in tasks such as image classification~\cite{ali2021xcit}, object detection~\cite{dai2021up}, and semantic segmentation~\cite{wang2021max}.
Learning-based methods can be roughly divided into two categories, i.e., holistic and semi-holistic approaches.
Holistic approaches regress the 3D translation and orientation components of the target object directly from the given input data~\cite{kehl2017ssd,shi2021stablepose}.
Xiang et al.~\cite{YuXiang2017PoseCNNAC} introduced the PoseCNN framework, which estimates the 3D translation by combining the center location of the target object in the image with the camera parameters. The 3D rotation is regressed directly by the network in quaternion form, utilizing the position and the semantic label of the object.
Li et al.~\cite{li2018deepim} presented a scheme that regresses the 6D pose in an iterative manner, refining the pose from initial parameters by matching the test image against the rendered image.
Sundermeyer et al.~\cite{sundermeyer2018implicit} proposed a real-time method to learn an implicit representation of the target, and leveraged an augmented auto-encoder to regress the 6D pose from the latent space.
Wang et al.~\cite{ChenWang2019DenseFusion6O} designed a novel multi-modal feature fusion framework, which utilizes two feature embedding branches to extract the color and the geometry cues, respectively, and employs a pixel-wise feature fusion mechanism to achieve deep interaction of the multi-modal data.
This line of work achieves satisfactory performance on several benchmarks and admits end-to-end network architectures. However, the generalization and learnability of these methods are restricted by the non-linearity of the rotation space.
Current semi-holistic approaches first extract keypoints on the surface of the target model, and then recover the pose with a PnP algorithm or least-squares fitting~\cite{gupta2019cullnet,rad2017bb8,tekin2018real}.
Hu et al.~\cite{hu2019segmentation} presented a two-stream segmentation-driven framework, which utilizes each visible cell assigned by the segmentation stream to predict the 2D keypoint locations of the corresponding object.
Peng et al.~\cite{peng2020pvnet} proposed a novel pixel-wise voting method to predict the 2D keypoints of the target object in a given image. The dense keypoint locations are predicted by regressing pixel-wise vectors pointing to the keypoints.
He et al.~\cite{he2021ffb6d} introduced a bidirectional fusion network, which achieves effective RGB-D feature fusion and employs the keypoint regression branch proposed by~\cite{he2020pvn3d} to obtain 3D keypoints.
Keypoint-based methods perform well in occlusion scenarios. However, detection performance degrades severely for texture-less objects when only RGB images can be used.
\subsection{Equivariant and Invariant Representation Learning}
Recently, equivariance and invariance, as essential properties of point cloud processing in the 3D vision field, have received extensive attention.
Lin et al.~\cite{lin2021sparse} presented SS-Conv, a novel convolution for efficient learning of SE(3)-equivariant features from point clouds, which designs a sparse steerable kernel based on spherical harmonics.
Li et al.~\cite{li2021leveraging} proposed an SE(3)-equivariant point cloud network for category-level object pose estimation, which decouples the pose and the object geometry by deploying an SE(3)-invariant shape reconstruction module and an SE(3)-equivariant pose estimation module.
Poulenard et al.~\cite{poulenard2019effective} introduced an SO(3)-invariant architecture that processes point clouds directly, employing a spherical-harmonics-based kernel at different layers of the network to inject invariant features at the global and local levels.
Deng et al.~\cite{deng2021vector} introduced a general framework for SO(3)-equivariant feature extraction from point clouds, which designs a vector-neuron representation in feature space.
\section{Method}
Our goal is to estimate the 6D pose parameters of a set of known objects from an RGB-D image. The 6D pose parameters can be represented by a rigid transformation $T\in SE(3)$ from the object coordinate system to the camera coordinate system, which consists of a 3D rotation matrix $R\in SO(3)$ and a 3D translation vector $t\in \mathbb{R}^3$.
Given a depth image $D \in \mathbb{R}^{W\times H}$, where $W$ and $H$ denote the image width and height, and the camera intrinsic matrix $K\in \mathbb{R}^{3 \times 3}$, we can acquire the corresponding point cloud $P\in \mathbb{R}^{(W\times H)\times 3}$ by back-projecting the homogeneous pixels $I\in \mathbb{R}^{(W\times H)\times 3}$ from the 2D image plane to 3D space:
\begin{equation}
P = D(x,y)I(K^{-1})^T.
\end{equation}
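As a concrete illustration, the following minimal NumPy sketch performs this back-projection; the function name and array layout are our own assumptions, not part of any released code.
\begin{verbatim}
import numpy as np

def depth_to_pointcloud(depth, K):
    # depth: (H, W) metric depth image D; K: (3, 3) camera intrinsics.
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates I, shape (H*W, 3).
    I = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=1)
    # P = D(x, y) * I (K^{-1})^T, as in Eq. (1).
    return depth.ravel()[:, None] * (I @ np.linalg.inv(K).T)
\end{verbatim}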
\subsection{Overview}
At the top level, we show our SO(3)-Pose in Fig. \ref{fig:overview}.
SO(3)-Pose follows the high-performing keypoint-based two-stage strategy~\cite{he2020pvn3d,he2021ffb6d}.
A 2D encoder and an SO(3)-equivariant encoder are utilized for representation learning of the
RGB image and the point cloud (converted from the depth image), respectively. For better appearance and geometry representation learning, SO(3)-Pose not only absorbs the geometry knowledge of SO(3)-equivariance from the point cloud, but also realizes cross-modal information communication. Through this communication, the SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects with similar appearance from the RGB channels, while the SO(3)-equivariant features bridge the RGB features to deduce the (missing) geometry for detecting keypoints of objects with reflective surfaces from the depth channel. Finally, the 6D pose parameters are fitted from the 3D-3D correspondences of the keypoints.
\subsection{SO(3)-equivariant Layer}
\label{equi}
SO(3) is the group of $3\times3$ rotation matrices.
A mapping $f$ acting on a feature $V$ is SO(3)-equivariant if and only if it satisfies:
\begin{equation}
\PHR{f(VR) = f(V)R,}
\label{so3equa}
\end{equation}
where $R\in SO(3)$ is a rotation matrix, $f$ is a mapping function representing a layer operation, and $V=\{v _{i} \in \mathbb{R}^{C \times 3}, i=1,2,3,...\}$ ($C$ is the feature dimension). Inspired by VNN \cite{deng2021vector}, our method learns the SO(3)-equivariance properties as follows.
Learning rotation equivariance requires building a series of basic SO(3)-equivariant layers, including linear, non-linear, pooling, and normalization layers. These layers all satisfy the SO(3)-equivariance property according to the definition.
Given a weight matrix $\mathbf{W} \in \mathbb{R}^{C^{\prime} \times C}$, we define a linear operation $f_{\operatorname{lin}}(\cdot ; \mathbf{W})$ acting on vector-list features $\boldsymbol{V} \in \mathbb{R}^{N\times C \times 3}$ as:
\begin{equation}
\boldsymbol{V}^{\prime}=f_{\operatorname{lin}}(\boldsymbol{V} ; \mathbf{W})=\mathbf{W} \boldsymbol{V} \in \mathbb{R}^{N \times C^{\prime} \times 3}.
\end{equation}
We verify that if the input rotates by $R\in SO(3)$, the output also rotates by the same matrix:
\begin{equation}
\PHR{
f_{\operatorname{lin}}(\boldsymbol{V}R ; \mathbf{W})= \mathbf{W} \boldsymbol{V}R = f_{\operatorname{lin}}(\boldsymbol{V} ; \mathbf{W}) R= \boldsymbol{V}^{\prime}R,
}
\end{equation}
yielding the desired equivariance property.
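To make this construction concrete, the following PyTorch sketch implements such a linear layer in the spirit of VNN \cite{deng2021vector} and numerically checks Eq. \ref{so3equa}; the shapes and names are our assumptions.
\begin{verbatim}
import torch

def vn_linear(V, W):
    # V: (N, C, 3) vector-list features; W: (C', C) weights.
    # W mixes channels only, never the 3 coordinates, so a rotation
    # applied on the right commutes with the layer.
    return torch.einsum('oc,ncd->nod', W, V)

V = torch.randn(8, 32, 3)
W = torch.randn(64, 32)
R = torch.linalg.qr(torch.randn(3, 3)).Q  # random orthogonal matrix
assert torch.allclose(vn_linear(V @ R, W),
                      vn_linear(V, W) @ R, atol=1e-5)
\end{verbatim}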
SO(3) has the closure property of a group; that is, the product of two rotation matrices is also a rotation matrix.
For the final output of a network to be SO(3)-equivariant with respect to the input, every intermediate output must also be SO(3)-equivariant with respect to the input. Thus, it is necessary to construct special SO(3)-equivariant layers that achieve the same functions as the non-linear, pooling, and normalization layers.
Non-linearity plays an important role in the representation learning of neural networks. The non-linear layers (e.g., ReLU, leaky-ReLU) split a feature space into two half-spaces: the positive half-space keeps its original feature and the negative half-space is muted or reduced by multiplication with a small weight.
To keep SO(3)-equivariance, we dynamically predict a direction from the input vector-list feature and then truncate the portion of a vector that points into the negative half-space of the learned direction:
\begin{equation}
\boldsymbol{v}^{\prime}=\left\{\begin{array}{ll}
\boldsymbol{q} & \text { if }\langle\boldsymbol{q}, \boldsymbol{k}\rangle \geqslant 0 \\
\boldsymbol{q}-\left\langle\boldsymbol{q}, \frac{\boldsymbol{k}}{\|\boldsymbol{k}\|}\right\rangle \frac{\boldsymbol{k}}{\|\boldsymbol{k}\|} & \text { otherwise }
\end{array}\right.
\end{equation}
where $\boldsymbol{q}=\mathbf{W} \boldsymbol{V}$, $\boldsymbol{k}=\mathbf{U} \boldsymbol{V}$, $\mathbf{W} \in \mathbb{R}^{ 1 \times C}$, $\mathbf{U} \in \mathbb{R}^{1 \times C}$ and $\boldsymbol{V} \in \mathbb{R}^{C \times 3}$.
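A minimal sketch of this non-linearity, written for $C'$ output channels at once (as in VNN \cite{deng2021vector}); the epsilon guard and the names are ours.
\begin{verbatim}
import torch

def vn_relu(V, W, U, eps=1e-8):
    # V: (N, C, 3); W, U: (C', C). q is the feature, k a learned
    # direction; the part of q in k's negative half-space is clipped.
    q = torch.einsum('oc,ncd->nod', W, V)
    k = torch.einsum('oc,ncd->nod', U, V)
    k_unit = k / (k.norm(dim=-1, keepdim=True) + eps)
    dot = (q * k_unit).sum(dim=-1, keepdim=True)  # <q, k/||k||>
    return torch.where(dot >= 0, q, q - dot * k_unit)
\end{verbatim}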
The pooling layer downsamples useful features. It also acts as a symmetric function in point cloud networks to handle the permutation invariance of point clouds. Two main solutions (i.e., max-pooling and mean-pooling) are used in most neural networks.
According to the definition (Eq. \ref{so3equa}), the average pooling operation satisfies SO(3)-equivariance, since the mean of rotated vectors equals the rotated mean.
We therefore adopt mean pooling to aggregate all feature information.
The normalization layer influences the convergence efficiency of the training process. Layer normalization \cite{JimmyBa2016LayerN} and instance normalization \cite{DmitryUlyanov2016InstanceNT} only change the distributions within a single sample of a batch, each subject to one rotation. Batch normalization \cite{SergeyIoffe2015BatchNA} aggregates statistics across all batch samples, which may be subject to several different rotations. Averaging across arbitrarily rotated inputs is not necessarily meaningful: for example, averaging two input features rotated in opposite directions can yield zero instead of a feature in a canonical pose. Thus, the batch normalization operation must be specially designed so that it does not break the SO(3)-equivariance of the features. We follow the design of VN-Batchnorm \cite{deng2021vector}, which is formulated as:
\begin{equation}
\boldsymbol{N}_{b}=\text { ElementwiseNorm }\left(\boldsymbol{V}_{b}\right) \in \mathbb{R}^{N \times 1},
\end{equation}
\begin{equation}
\left\{\boldsymbol{N}_{b}^{\prime}\right\}_{b=1}^{B}=\text { BatchNorm }\left(\left\{\boldsymbol{N}_{b}\right\}_{b=1}^{B}\right),
\end{equation}
\begin{equation}
\boldsymbol{V}_{b}^{\prime}[c]=\boldsymbol{V}_{b}[c] \frac{\boldsymbol{N}_{b}^{\prime}[c]}{\boldsymbol{N}_{b}[c]}, \quad \forall c \in[C],
\end{equation}
where $\left\{\boldsymbol{V}_{b}\right\}_{b=1}^{B}$ are a batch of $B$ vector-list features $\boldsymbol{V}_{b}\in\mathbb{R}^{C \times 3}$, $\boldsymbol{V}_{b}^{\prime}[c]$, $\boldsymbol{V}_{b}[c]$ are the vector channels, $\boldsymbol{N}_{b}^{\prime}[c]$, $\boldsymbol{N}_{b}[c]$ are their scalar 2-norms, and $\text { ElementwiseNorm }\left(\boldsymbol{V}_{b}\right)$ computes the 2-norm of every vector channel $\boldsymbol{v}_{c}=\boldsymbol{V}_{b}[c] \in \boldsymbol{V}_{b}.$
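A sketch of this normalization, assuming a (B, C, 3) feature layout; the affine parameters and running statistics are left to the standard 1D batch norm, and the zero guard is our addition.
\begin{verbatim}
import torch
import torch.nn as nn

class VNBatchNorm(nn.Module):
    # Normalizes only the per-channel vector norms; directions, and
    # hence SO(3)-equivariance, are left untouched.
    def __init__(self, channels, eps=1e-8):
        super().__init__()
        self.bn = nn.BatchNorm1d(channels)
        self.eps = eps

    def forward(self, V):              # V: (B, C, 3)
        norm = V.norm(dim=-1)          # element-wise norms N_b, (B, C)
        norm_bn = self.bn(norm)        # scalar batch norm on the norms
        return V * (norm_bn / (norm + self.eps)).unsqueeze(-1)
\end{verbatim}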
\begin{figure}[t]
\centering
\includegraphics[height=0.5\linewidth]{E2Ilayer.pdf}
\caption{The structure of E2Ilayer. Red box indicates the SO(3)-equivariant linear layer introduced in Section \ref{equi}. }
\label{fig:E2Ilayer}
\end{figure}
\subsection{SO(3)-invariant Layer}
The SO(3)-invariance property is crucial for many tasks, such as classification and segmentation, where the label of an object or its parts should be invariant to the object's pose.
We define an SO(3)-invariant feature mapping as:
\begin{equation}
f(VR) = f(V).
\end{equation}
A rotation matrix is orthogonal; its inverse equals its transpose.
We can compute an SO(3)-invariant feature from two SO(3)-equivariant features:
\begin{equation}
(\boldsymbol{V} R)(\boldsymbol{T}R)^{\top}=\boldsymbol{V} R R^{\top} \boldsymbol{T}^{\top} =\boldsymbol{V} \boldsymbol{T}^{\top}.
\end{equation}
We propose an E2Ilayer (equivariant-to-invariant layer) to learn the SO(3)-invariant features, as illustrated in Fig. \ref{fig:E2Ilayer}.
The attained SO(3)-invariant features are used to map the feature space onto the semantic label space.
We also design a series of linear layers to increase the expressive ability of the features.
\label{inva}
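To illustrate the core of the E2Ilayer, the following sketch shows only the invariance-producing product $\boldsymbol{V}\boldsymbol{T}^{\top}$ above; the surrounding linear layers of the module are omitted, and the shapes are our assumptions.
\begin{verbatim}
import torch

def e2i(V_equi, T_equi):
    # V_equi: (N, C, 3), T_equi: (N, K, 3), both SO(3)-equivariant.
    # (VR)(TR)^T = V R R^T T^T = V T^T, so the (N, C, K) output is
    # SO(3)-invariant.
    return torch.einsum('ncd,nkd->nck', V_equi, T_equi)
\end{verbatim}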
\subsection{Feature Fusion}
Feature fusion is used to map the SO(3)-equivariant features to the keypoint offset space and map the SO(3)-invariant features to the semantic label space.
From our observation, the semantic label is invariant when the object rotates, whereas the relative orientation changes along with the rotation. Based on this observation, we extract SO(3)-invariant and SO(3)-equivariant features from the point cloud.
We use a general 2D extractor to obtain local and global appearance features from the RGB images; we then fuse the appearance and SO(3)-equivariant features to map the feature space onto the keypoint offset space.
We fuse the appearance features and the SO(3)-invariant features to map the feature space onto the semantic space. Specifically, we first concatenate the geometric feature and the appearance feature, and feed the fused features to an MLP to generate the mixed feature space for the final prediction.
To avoid overfitting, we use only two linear layers.
\label{fusi}
\subsection{6D Object Pose Estimation}
Once each point is assigned a semantic label and an offset relative to the keypoints, we can start the second stage of pose estimation.
In detail, we first obtain each object instance according to the per-point label and the center point, which is one of the predicted keypoints.
Next, we cluster the per-point predictions to vote for the keypoints corresponding to those selected from the object models. Finally, we utilize a least-squares fitting algorithm based on the 3D-3D correspondences to compute the pose parameters.
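For reference, a generic SVD-based sketch of this least-squares fitting (the standard Kabsch solution, not the authors' released code) is given below.
\begin{verbatim}
import numpy as np

def fit_rigid_transform(src, dst):
    # src, dst: (K, 3) corresponding keypoints; returns R, t with
    # dst ~= src @ R.T + t in the least-squares sense.
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
\end{verbatim}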
Let $L_{\text {seg}}$ and $L_{\text {kp}}$ be the two losses that are imposed on the segmentation branch and keypoint detection branch.
For the special center point, we additionally use $L_{\text {center}}$. $L_{\text {seg}}$ is a focal loss \cite{TsungYiLin2017FocalLF}.
$L_{\text {kp}}$ and $L_{\text {center}}$ are $L_1$ distance losses following PVN3D \cite{he2020pvn3d}.
Features extracted from the SO(3)-equivariant layers have a transitivity property:
rotating an earlier feature by $R$ is equivalent to rotating all later features by the same matrix $R$.
In order to improve the expression of the SO(3)-equivariant property, we propose a new loss function, dubbed SO3Loss, for the SO(3)-equivariant layers as:
\begin{equation}
L_{\text {so3}}= \left|f(V)-f(VR)R^{-1}\right|,
\end{equation}
where $f$ represents a mapping through a series of SO(3)-equivariant layers, $R$ is any rotation matrix, and $V$ is a former feature. We jointly optimize the detection task and the auxiliary tasks by applying a gradient descent method to minimize the weighted sum of the following losses:
\begin{equation}
L_{\text {all }}=\lambda_{1} L_{\text {seg }}+\lambda_{2} L_{\text {kp }}+\lambda_{3} L_{\text {center }}+\lambda_{4} L_{\text {so3 }},
\end{equation}
where $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ and $\lambda_{4}$ are the weights for each task.
\label{so3loss}
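A sketch of the SO3Loss term; the random-rotation sampling and the mean-absolute reduction are our assumptions about how $|\cdot|$ is realized in practice.
\begin{verbatim}
import torch

def so3_loss(f, V):
    # f: a stack of SO(3)-equivariant layers; V: (N, C, 3) features.
    R = torch.linalg.qr(torch.randn(3, 3)).Q
    if torch.det(R) < 0:          # ensure a proper rotation (det = +1)
        R = -R
    # | f(V) - f(VR) R^{-1} |, with R^{-1} = R^T for rotations.
    return (f(V) - f(V @ R) @ R.T).abs().mean()
\end{verbatim}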
\section{Experiments}
\subsection{Datasets}
We evaluate our method on two benchmark datasets, including the YCB-Video dataset and the LineMOD dataset.
\textbf{YCB-Video dataset.} YCB-Video \cite{YuXiang2017PoseCNNAC} consists of 21 YCB objects \cite{BerkCalli2015TheYO} in 92 RGB-D videos. All objects appearing in a scene are annotated with 6D poses and instance-level masks. We follow the previous works \cite{YuXiang2017PoseCNNAC, ChenWang2019DenseFusion6O, he2020pvn3d} to split the training and testing sets.
We also use synthetic images for training as per \cite{YuXiang2017PoseCNNAC} and apply a hole completion algorithm to fill the depth images as per \cite{JasonKu2018InDO}.
\textbf{LineMOD dataset.} LineMOD \cite{StefanHinterstoisser2011MultimodalTF} contains 13 low-textured objects in 13 videos, with annotated 6D poses and instance-semantic masks. The varying lighting, texture-less objects, and cluttered scenes make this dataset challenging. We follow prior works \cite{YuXiang2017PoseCNNAC, ChenWang2019DenseFusion6O} to split the training and testing sets, and we also obtain synthesized images for the training set following \cite{peng2019pvnet, he2020pvn3d}.
\textbf{Occlusion LineMOD dataset.} Occlusion LineMOD \cite{EricBrachmann2014Learning6O} is created by additionally annotating a subset of the LineMOD dataset. It contains 8 of the 13 LineMOD objects. The heavily occluded objects make the dataset challenging.
\begin{table}[t]
\centering
\begin{tabular}{l|ccc}
\hline
\multicolumn{1}{c|}{method} & E & E+F & E+F+S \\ \hline
002\_master\_chef\_can & 95.35 & 95.10 & \textbf{95.52} \\
003\_cracker\_box & \textbf{95.19} & 94.86 & 94.32 \\
004\_sugar\_box & 96.53 & \textbf{96.60} & 96.47 \\
005\_tomato\_soup\_can & 95.19 & 95.15 & \textbf{95.35} \\
006\_mustard\_bottle & 95.94 & 96.15 & \textbf{96.74} \\
007\_tuna\_fish\_can & 95.64 & \textbf{96.05} & 95.80 \\
008\_pudding\_box & 96.47 & 96.18 & \textbf{96.83} \\
009\_gelatin\_box & 97.00 & \textbf{97.54} & 97.30 \\
010\_potted\_meat\_can & 93.45 & \textbf{93.47} & 92.42 \\
011\_banana & 94.61 & 95.05 & \textbf{95.75} \\
019\_pitcher\_base & 95.55 & \textbf{95.95} & 95.56 \\
021\_bleach\_cleanser & \textbf{95.41} & 94.90 & 95.14 \\
\textbf{024\_bowl} & 81.01 & 84.78 & \textbf{88.41} \\
025\_mug & 96.88 & \textbf{96.92} & 96.72 \\
035\_power\_drill & 95.42 & 95.84 & \textbf{95.90} \\
\textbf{036\_wood\_block} & \textbf{87.54} & 83.12 & 86.58 \\
037\_scissors & 90.90 & \textbf{95.43} & 91.84 \\
040\_large\_marker & 93.76 & \textbf{95.00} & 94.37 \\
\textbf{051\_large\_clamp} & 88.48 & 91.93 & \textbf{94.15} \\
\textbf{052\_extra\_large\_clamp} & 84.51 & 90.99 & \textbf{91.22} \\
\textbf{061\_foam\_brick} & 93.94 & \textbf{95.30} & 94.98 \\ \hline
average & 93.57 & 94.11 & \textbf{94.35}
\end{tabular}
\caption{Ablation studies on different configurations in terms of the ADDS metric, where objects in bold are considered symmetric. ``E'' only uses the SO(3)-equivariant layers; ``E+F'' adds the SO(3)-invariant layers and the feature fusion strategy; ``E+F+S'' further introduces the SO3Loss. }
\label{ycb_abl}
\end{table}
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cc|cc|cc|cc|cc|cc|cc}
\hline
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{PoseCNN} & \multicolumn{2}{c|}{PoseCNN+ICP} & \multicolumn{2}{c|}{DF(per-pixel)} & \multicolumn{2}{c|}{DF(iterative)} & \multicolumn{2}{c|}{FFB6D(trained)} & \multicolumn{2}{c|}{Ours} & \multicolumn{2}{c}{Ours+ICP} \\ \hline
& \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) & \multicolumn{1}{c|}{ADDS} & ADD(-S) \\ \hline
002\_master\_chef\_can & \multicolumn{1}{c|}{83.9} & 50.2 & \multicolumn{1}{c|}{95.8} & 68.1 & \multicolumn{1}{c|}{95.3} & 70.7 & \multicolumn{1}{c|}{\textbf{96.4}} & 73.2 & \multicolumn{1}{c|}{95.7} & 78.6 & \multicolumn{1}{c|}{95.5} & 82.2 & \multicolumn{1}{c|}{95.5} & \textbf{82.8} \\
003\_cracker\_box & \multicolumn{1}{c|}{76.9} & 53.1 & \multicolumn{1}{c|}{92.7} & 83.4 & \multicolumn{1}{c|}{92.5} & 86.9 & \multicolumn{1}{c|}{\textbf{95.8}} & \textbf{94.1} & \multicolumn{1}{c|}{94.4} & 90.0 & \multicolumn{1}{c|}{94.3} & 90.6 & \multicolumn{1}{c|}{91.1} & 85.1 \\
004\_sugar\_box & \multicolumn{1}{c|}{84.2} & 68.4 & \multicolumn{1}{c|}{\textbf{98.2}} & \textbf{97.1} & \multicolumn{1}{c|}{95.1} & 90.8 & \multicolumn{1}{c|}{95.8} & 94.1 & \multicolumn{1}{c|}{96.3} & 93.4 & \multicolumn{1}{c|}{96.5} & 94.2 & \multicolumn{1}{c|}{97.3} & 96.3 \\
005\_tomato\_soup\_can & \multicolumn{1}{c|}{81.0} & 66.2 & \multicolumn{1}{c|}{94.5} & 81.8 & \multicolumn{1}{c|}{93.8} & 84.7 & \multicolumn{1}{c|}{94.5} & 85.5 & \multicolumn{1}{c|}{94.3} & 83.1 & \multicolumn{1}{c|}{95.4} & 89.5 & \multicolumn{1}{c|}{\textbf{95.7}} & \textbf{89.9} \\
006\_mustard\_bottle & \multicolumn{1}{c|}{90.4} & 81.0 & \multicolumn{1}{c|}{\textbf{98.6}} & \textbf{98.0} & \multicolumn{1}{c|}{95.8} & 90.9 & \multicolumn{1}{c|}{97.3} & 94.7 & \multicolumn{1}{c|}{96.7} & 94.4 & \multicolumn{1}{c|}{96.7} & 94.6 & \multicolumn{1}{c|}{97.8} & 97.0 \\
007\_tuna\_fish\_can & \multicolumn{1}{c|}{88.0} & 70.7 & \multicolumn{1}{c|}{97.1} & 83.9 & \multicolumn{1}{c|}{95.7} & 79.6 & \multicolumn{1}{c|}{97.1} & 81.9 & \multicolumn{1}{c|}{95.5} & 83.1 & \multicolumn{1}{c|}{95.8} & 86.6 & \multicolumn{1}{c|}{\textbf{97.1}} & \textbf{87.8} \\
008\_pudding\_box & \multicolumn{1}{c|}{79.1} & 62.7 & \multicolumn{1}{c|}{\textbf{97.9}} & \textbf{96.6} & \multicolumn{1}{c|}{94.3} & 89.3 & \multicolumn{1}{c|}{96.0} & 93.3 & \multicolumn{1}{c|}{95.5} & 91.6 & \multicolumn{1}{c|}{96.8} & 94.2 & \multicolumn{1}{c|}{97.5} & 96.1 \\
009\_gelatin\_box & \multicolumn{1}{c|}{87.2} & 75.2 & \multicolumn{1}{c|}{\textbf{98.8}} & \textbf{98.1} & \multicolumn{1}{c|}{97.2} & 95.8 & \multicolumn{1}{c|}{98.0} & 96.7 & \multicolumn{1}{c|}{96.8} & 93.5 & \multicolumn{1}{c|}{97.3} & 94.7 & \multicolumn{1}{c|}{98.5} & 97.8 \\
010\_potted\_meat\_can & \multicolumn{1}{c|}{78.5} & 59.5 & \multicolumn{1}{c|}{\textbf{92.7}} & 83.5 & \multicolumn{1}{c|}{89.3} & 79.6 & \multicolumn{1}{c|}{90.7} & 83.6 & \multicolumn{1}{c|}{89.7} & 82.9 & \multicolumn{1}{c|}{92.4} & \textbf{84.5} & \multicolumn{1}{c|}{91.6} & 80.9 \\
011\_banana & \multicolumn{1}{c|}{86.0} & 72.3 & \multicolumn{1}{c|}{97.1} & 91.9 & \multicolumn{1}{c|}{90.0} & 76.7 & \multicolumn{1}{c|}{96.2} & 83.3 & \multicolumn{1}{c|}{96.8} & 94.0 & \multicolumn{1}{c|}{95.8} & 91.3 & \multicolumn{1}{c|}{\textbf{97.5}} & \textbf{95.1} \\
019\_pitcher\_base & \multicolumn{1}{c|}{77.0} & 53.3 & \multicolumn{1}{c|}{\textbf{97.8}} & 96.9 & \multicolumn{1}{c|}{93.6} & 87.1 & \multicolumn{1}{c|}{97.5} & \textbf{96.9} & \multicolumn{1}{c|}{95.5} & 91.4 & \multicolumn{1}{c|}{95.6} & 91.7 & \multicolumn{1}{c|}{97.1} & 95.8 \\
021\_bleach\_cleanser & \multicolumn{1}{c|}{71.6} & 50.3 & \multicolumn{1}{c|}{96.9} & 92.5 & \multicolumn{1}{c|}{94.4} & 87.5 & \multicolumn{1}{c|}{95.9} & 89.9 & \multicolumn{1}{c|}{95.4} & 90.6 & \multicolumn{1}{c|}{95.1} & 91.4 & \multicolumn{1}{c|}{\textbf{96.9}} & \textbf{94.9} \\
\textbf{024\_bowl} & \multicolumn{1}{c|}{69.6} & 69.6 & \multicolumn{1}{c|}{81.0} & 81.0 & \multicolumn{1}{c|}{86.0} & 86.0 & \multicolumn{1}{c|}{\textbf{89.5}} & \textbf{89.5} & \multicolumn{1}{c|}{86.2} & 86.2 & \multicolumn{1}{c|}{88.4} & 88.4 & \multicolumn{1}{c|}{87.8} & 87.8 \\
025\_mug & \multicolumn{1}{c|}{78.2} & 58.5 & \multicolumn{1}{c|}{94.9} & 81.1 & \multicolumn{1}{c|}{95.3} & 83.8 & \multicolumn{1}{c|}{96.7} & 88.9 & \multicolumn{1}{c|}{97.0} & 91.3 & \multicolumn{1}{c|}{96.7} & 91.1 & \multicolumn{1}{c|}{\textbf{97.7}} & \textbf{94.8} \\
035\_power\_drill & \multicolumn{1}{c|}{72.7} & 55.3 & \multicolumn{1}{c|}{\textbf{98.2}} & \textbf{97.7} & \multicolumn{1}{c|}{92.1} & 83.7 & \multicolumn{1}{c|}{96.0} & 92.7 & \multicolumn{1}{c|}{95.9} & 93.1 & \multicolumn{1}{c|}{95.9} & 93.2 & \multicolumn{1}{c|}{96.6} & 94.8 \\
\textbf{036\_wood\_block} & \multicolumn{1}{c|}{64.3} & 64.3 & \multicolumn{1}{c|}{87.6} & 87.6 & \multicolumn{1}{c|}{89.5} & 89.5 & \multicolumn{1}{c|}{\textbf{92.8}} & \textbf{92.8} & \multicolumn{1}{c|}{91.5} & 91.5 & \multicolumn{1}{c|}{86.6} & 86.6 & \multicolumn{1}{c|}{91.6} & 91.6 \\
037\_scissors & \multicolumn{1}{c|}{56.9} & 35.8 & \multicolumn{1}{c|}{91.7} & 78.4 & \multicolumn{1}{c|}{90.1} & 77.4 & \multicolumn{1}{c|}{92.0} & 77.9 & \multicolumn{1}{c|}{\textbf{94.1}} & \textbf{83.0} & \multicolumn{1}{c|}{91.8} & 81.0 & \multicolumn{1}{c|}{88.5} & 73.1 \\
040\_large\_marker & \multicolumn{1}{c|}{71.7} & 58.3 & \multicolumn{1}{c|}{97.2} & 85.3 & \multicolumn{1}{c|}{95.1} & 89.1 & \multicolumn{1}{c|}{97.6} & 93.0 & \multicolumn{1}{c|}{94.7} & 85.2 & \multicolumn{1}{c|}{94.4} & 86.9 & \multicolumn{1}{c|}{\textbf{97.6}} & \textbf{88.3} \\
\textbf{051\_large\_clamp} & \multicolumn{1}{c|}{50.2} & 50.2 & \multicolumn{1}{c|}{75.2} & 75.2 & \multicolumn{1}{c|}{71.5} & 71.5 & \multicolumn{1}{c|}{72.5} & 72.5 & \multicolumn{1}{c|}{91.0} & 91.0 & \multicolumn{1}{c|}{94.2} & 94.2 & \multicolumn{1}{c|}{\textbf{95.8}} & \textbf{95.8} \\
\textbf{052\_extra\_large\_clamp} & \multicolumn{1}{c|}{44.1} & 44.1 & \multicolumn{1}{c|}{64.4} & 64.4 & \multicolumn{1}{c|}{70.2} & 70.2 & \multicolumn{1}{c|}{69.9} & 69.9 & \multicolumn{1}{c|}{91.2} & 91.2 & \multicolumn{1}{c|}{91.2} & 91.2 & \multicolumn{1}{c|}{\textbf{91.7}} & \textbf{91.7} \\
\textbf{061\_foam\_brick} & \multicolumn{1}{c|}{88.0} & 88.0 & \multicolumn{1}{c|}{\textbf{97.2}} & \textbf{97.2} & \multicolumn{1}{c|}{92.2} & 92.2 & \multicolumn{1}{c|}{92.0} & 92.0 & \multicolumn{1}{c|}{93.8} & 93.8 & \multicolumn{1}{c|}{95.0} & 95.0 & \multicolumn{1}{c|}{97.0} & 97.0 \\ \hline
average & \multicolumn{1}{c|}{75.8} & 59.9 & \multicolumn{1}{c|}{93.0} & 85.4 & \multicolumn{1}{c|}{91.2} & 82.9 & \multicolumn{1}{c|}{93.2} & 86.1 & \multicolumn{1}{c|}{94.2} & 89.1 & \multicolumn{1}{c|}{94.4} & 90.1 & \multicolumn{1}{c|}{\textbf{95.1}} & \textbf{91.1} \\ \hline
\end{tabular}
}
\caption{Quantitative evaluation of the ADDS AUC and ADD(-S) AUC metrics on the YCB-Video dataset. Objects in bold are considered symmetric. FFB6D(trained) indicates that the model is trained
under its original settings \cite{he2021ffb6d}. }
\label{ycb_other}
\end{table*}
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccc|ccccccc}
\hline
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{RGB} & \multicolumn{7}{c}{RGB-D} \\ \hline
& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}PoseCNN+\\ DeepIM\end{tabular}} & \multicolumn{1}{c|}{PVNet} & \multicolumn{1}{c|}{CDPN} & DPOD & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Point-\\ Fusion\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Dense-\\ Fusion\end{tabular}} & \multicolumn{1}{c|}{G2LNet} & \multicolumn{1}{c|}{PVN3D} & \multicolumn{1}{c|}{SS-Conv} & \multicolumn{1}{c|}{FFB6D} & Ours \\ \hline
ape & \multicolumn{1}{c|}{77.0} & \multicolumn{1}{c|}{43.6} & \multicolumn{1}{c|}{64.4} & 87.7 & \multicolumn{1}{c|}{70.4} & \multicolumn{1}{c|}{92.3} & \multicolumn{1}{c|}{96.8} & \multicolumn{1}{c|}{97.3} & \multicolumn{1}{c|}{97.4} & \multicolumn{1}{c|}{98.4} & \textbf{98.6} \\
benchvise & \multicolumn{1}{c|}{97.5} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{97.8} & 98.5 & \multicolumn{1}{c|}{80.7} & \multicolumn{1}{c|}{93.2} & \multicolumn{1}{c|}{96.1} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{99.3} & \multicolumn{1}{c|}{\textbf{100.0}} & \textbf{100.0} \\
camera & \multicolumn{1}{c|}{93.5} & \multicolumn{1}{c|}{86.9} & \multicolumn{1}{c|}{91.7} & 96.1 & \multicolumn{1}{c|}{60.8} & \multicolumn{1}{c|}{94.4} & \multicolumn{1}{c|}{98.2} & \multicolumn{1}{c|}{99.6} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{\textbf{99.9}} & \textbf{99.9} \\
can & \multicolumn{1}{c|}{96.5} & \multicolumn{1}{c|}{95.5} & \multicolumn{1}{c|}{95.9} & 99.7 & \multicolumn{1}{c|}{61.1} & \multicolumn{1}{c|}{93.1} & \multicolumn{1}{c|}{98.0} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{99.6} & \multicolumn{1}{c|}{99.8} & \textbf{100.0} \\
cat & \multicolumn{1}{c|}{82.1} & \multicolumn{1}{c|}{79.3} & \multicolumn{1}{c|}{83.8} & 94.7 & \multicolumn{1}{c|}{79.1} & \multicolumn{1}{c|}{96.5} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{99.9} & \textbf{100.0} \\
driller & \multicolumn{1}{c|}{95.0} & \multicolumn{1}{c|}{96.4} & \multicolumn{1}{c|}{96.2} & 98.8 & \multicolumn{1}{c|}{47.3} & \multicolumn{1}{c|}{87.0} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{99.3} & \multicolumn{1}{c|}{99.6} & \multicolumn{1}{c|}{100.0} & \textbf{100.0} \\
duck & \multicolumn{1}{c|}{77.7} & \multicolumn{1}{c|}{52.6} & \multicolumn{1}{c|}{66.8} & 86.3 & \multicolumn{1}{c|}{63.0} & \multicolumn{1}{c|}{92.3} & \multicolumn{1}{c|}{97.7} & \multicolumn{1}{c|}{98.2} & \multicolumn{1}{c|}{97.8} & \multicolumn{1}{c|}{98.4} & \textbf{98.5} \\
\textbf{eggbox} & \multicolumn{1}{c|}{97.1} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{99.7} & 99.9 & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{\textbf{100.0}} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{\textbf{100.0}} & \textbf{100.0} \\
\textbf{glue} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{95.7} & \multicolumn{1}{c|}{99.6} & 96.8 & \multicolumn{1}{c|}{99.3} & \multicolumn{1}{c|}{\textbf{100.0}} & \multicolumn{1}{c|}{\textbf{100.0}} & \multicolumn{1}{c|}{\textbf{100.0}} & \multicolumn{1}{c|}{99.6} & \multicolumn{1}{c|}{\textbf{100.0}} & \textbf{100.0} \\
holepuncher & \multicolumn{1}{c|}{52.8} & \multicolumn{1}{c|}{82.0} & \multicolumn{1}{c|}{85.8} & 86.9 & \multicolumn{1}{c|}{71.8} & \multicolumn{1}{c|}{92.1} & \multicolumn{1}{c|}{99.0} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{99.8} & \textbf{100.0} \\
iron & \multicolumn{1}{c|}{98.3} & \multicolumn{1}{c|}{98.9} & \multicolumn{1}{c|}{97.9} & \textbf{100.0} & \multicolumn{1}{c|}{83.2} & \multicolumn{1}{c|}{97.0} & \multicolumn{1}{c|}{99.3} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{99.9} & \textbf{100.0} \\
lamp & \multicolumn{1}{c|}{97.5} & \multicolumn{1}{c|}{99.3} & \multicolumn{1}{c|}{97.9} & 96.8 & \multicolumn{1}{c|}{62.3} & \multicolumn{1}{c|}{95.3} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{99.9} & \textbf{100.0} \\
phone & \multicolumn{1}{c|}{87.7} & \multicolumn{1}{c|}{92.4} & \multicolumn{1}{c|}{90.8} & 94.7 & \multicolumn{1}{c|}{78.8} & \multicolumn{1}{c|}{92.8} & \multicolumn{1}{c|}{98.9} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{98.2} & \multicolumn{1}{c|}{99.7} & \textbf{99.9} \\ \hline
MEAN & \multicolumn{1}{c|}{88.6} & \multicolumn{1}{c|}{86.3} & \multicolumn{1}{c|}{89.9} & 95.2 & \multicolumn{1}{c|}{73.7} & \multicolumn{1}{c|}{94.3} & \multicolumn{1}{c|}{98.7} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{99.7} & \textbf{99.8} \\ \hline
\end{tabular}
}
\caption{\PHR{Quantitative evaluation using the ADD(-S)-0.1d metric on the LineMOD dataset. Objects with bold name are symmetric.}}
\label{lm_other}
\end{table*}
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c|c}
\hline
\multicolumn{1}{c|}{Method} & PoseCNN & Pix2Pose & PVNet & DPOD & \begin{tabular}[c]{@{}c@{}}Hu et \\ al.\end{tabular} & HybridPose & PVN3D & FFB6D & Ours \\ \hline
ape & 9.6 & 22.0 & 15.8 & - & 19.2 & 20.9 & 33.9 & 47.2 & \textbf{49.7} \\
can & 45.2 & 44.7 & 63.3 & - & 65.1 & 75.3 & 88.6 & 85.2 & \textbf{88.8} \\
cat & 0.9 & 22.7 & 16.7 & - & 18.9 & 24.9 & 39.1 & 45.7 & \textbf{50.9} \\
driller & 41.4 & 44.7 & 65.7 & - & 69.0 & 70.2 & 78.4 & 81.4 & \textbf{88.6} \\
duck & 19.6 & 15.0 & 25.2 & - & 25.3 & 27.9 & 41.9 & 53.9 & \textbf{58.1} \\
\textbf{eggbox} & 22.0 & 25.2 & 50.2 & - & 52.0 & 52.4 & \textbf{80.9} & 70.2 & 60.8 \\
\textbf{glue} & 38.5 & 32.4 & 49.6 & - & 51.4 & 53.8 & 68.1 & 60.1 & \textbf{71.0} \\
holepuncher & 22.1 & 49.5 & 39.7 & - & 45.6 & 54.2 & 74.7 & \textbf{85.9} & 82.4 \\ \hline
MEAN & 24.9 & 32.0 & 40.8 & 47.3 & 43.3 & 47.5 & 63.2 & 66.2 & \textbf{68.4} \\ \hline
\end{tabular}
}
\caption{\PHR{Quantitative evaluation using the ADD(-S)-0.1d metric on the Occlusion LineMOD dataset. Hu et al. refers to \cite{YinlinHu2020SingleStage6O}. Symmetric objects' names are in bold.}}
\label{occlm_other}
\end{table*}
\subsection{Metrics}
We evaluate our method with the average distance metrics ADD \cite{StefanHinterstoisser2011MultimodalTF} and ADD-S \cite{YuXiang2017PoseCNNAC}. For non-symmetric objects, the ADD metric calculates the point-pair average distance between object model vertices transformed by the ground truth pose $[R^{*}, t^{*}]$ and the predicted pose $[R, t]$:
\begin{equation}
\mathrm{ADD}=\frac{1}{m} \sum_{x \in \mathcal{O}}\left\|(R^{*} x+t^{*})-\left(R x+t\right)\right\|,
\end{equation}
where $x$ denotes a vertex in the object model $\mathcal{O}$ and $m$ is the number of vertices. For symmetric objects, the ADD-S metric calculates the mean distance based on the closest-point distance:
\begin{equation}
\mathrm{ADD}\mbox{-} \mathrm{S}=\frac{1}{m} \sum_{x_{1} \in \mathcal{O}} \min _{x_{2} \in \mathcal{O}}\left\|\left(R^{*} x_{1}+t^{*}\right)-\left(R x_{2}+t\right)\right\|.
\end{equation}
On the YCB-Video dataset, we follow prior works \cite{YuXiang2017PoseCNNAC, ChenWang2019DenseFusion6O, he2021ffb6d} to report the area under the ADDS and ADD(-S) \cite{hinterstoisser2012model} curves (AUC), with the maximum threshold of the AUC set to 0.1 m. ADD(-S) calculates ADD for non-symmetric objects and ADDS for symmetric objects. On the LineMOD dataset, we follow \cite{peng2019pvnet} and use ADD(-S)-0.1d, which deems an estimated pose correct when the ADD(-S) distance is less than 10\% of the model's diameter.
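For clarity, a direct NumPy transcription of the two metrics follows; the brute-force closest-point search is only for illustration.
\begin{verbatim}
import numpy as np

def add_metric(R_gt, t_gt, R, t, pts):
    # pts: (m, 3) model vertices; mean distance under both poses.
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R.T + t
    return np.linalg.norm(gt - pred, axis=1).mean()

def adds_metric(R_gt, t_gt, R, t, pts):
    # Symmetric objects: nearest predicted point per GT point, O(m^2).
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R.T + t
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    return d.min(axis=1).mean()
\end{verbatim}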
\subsection{Implementation Details}
We train our model on the original YCB-Video dataset and on the LineMOD dataset with additional rendered images, following FFB6D \cite{he2021ffb6d}. We select PSPNet \cite{HengshuangZhao2017PyramidSP}, widely used in image segmentation, as the 2D feature extractor. We build the point cloud feature extractor from the components presented in VNN \cite{deng2021vector}.
In the training stage, we set the loss weights $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ and $\lambda_{4}$ to 1.0, 1.0, 1.0, and 0.5, respectively. We train for 20 epochs on the YCB-Video dataset and 10 epochs per object on the LineMOD dataset.
\subsection{Ablation Studies}
We conduct ablation studies on the YCB-Video dataset to evaluate the effect of the feature fusion strategy and the SO(3) loss; Table \ref{ycb_abl} summarizes the results.
The column ``E'' is based on the SO(3)-equivariant layers and uses the concatenation of SO(3)-equivariant features and appearance features to predict both the semantic label and the keypoint offset. Column ``E+F'' denotes the base SO(3)-equivariant layers with the additional feature fusion strategy.
Column ``E+F+S'' shows the results of further adding the SO(3) loss. The 2D encoder is the same for all models.
To validate the benefit of the feature fusion strategy, we compare the column ``E+F'' with the column ``E''. The results show that the feature fusion strategy can learn a better mapping from the feature space to the semantic space and keypoint offset space, thus increasing the accuracy of pose estimation.
To analyze the SO(3) loss, we compare the pose estimation results against the ``E+F'' model. The results in column ``E+F+S'' demonstrate that adding the SO(3) loss improves the accuracy of pose estimation.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{ycbvis.pdf}
\caption{Visual Comparison with FFB6D \cite{he2021ffb6d} on YCB-Video. Different objects in the same scene are in different colors. The points are projected back to the image after being transformed by the predicted pose.
}
\label{vis}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{LMvis.pdf}
\caption{Visual Comparison with FFB6D \cite{he2021ffb6d}. We select two challenging objects (ape boxed in blue and duck boxed in orange) from LineMOD.
}
\label{lmvis}
\end{figure*}
\subsection{Comparisons}
We compare our method with the state-of-the-art methods which take RGB or RGB-D as input and output the 6D object pose.
\textbf{Performance on the YCB-Video dataset.} In Table \ref{ycb_other}, we compare our method with PoseCNN \cite{YuXiang2017PoseCNNAC}, DenseFusion \cite{ChenWang2019DenseFusion6O}, and FFB6D \cite{he2021ffb6d} on the YCB-Video dataset in terms of the ADDS AUC and ADD(-S) AUC metrics. Our method achieves competitive performance against its competitors. \cite{YuXiang2017PoseCNNAC, ChenWang2019DenseFusion6O} directly regress the rotation matrix and translation vector, while our two-stage method
first predicts the keypoint locations and then fits the 6D pose parameters.
The results show that our method performs well for the large clamp and the extra-large clamp, which are difficult to detect because of their symmetry and mutual similarity. \PHR{Compared with other methods that use a refinement procedure, our model with ICP outperforms PoseCNN+ICP by 6.7\% and exceeds DF(iterative) by 5.8\% on the ADD(-S) metric.}
\textbf{Performance on the LineMOD dataset.} In Table \ref{lm_other}, we compare our method with PoseCNN \cite{YuXiang2017PoseCNNAC} + DeepIM \cite{li2018deepim}, PointFusion \cite{DanfeiXu2017PointFusionDS}, PVNet \cite{peng2019pvnet}, DenseFusion \cite{ChenWang2019DenseFusion6O}, G2LNet \cite{WeiChen2020G2LNetGT}, PVN3D \cite{he2020pvn3d}, FFB6D \cite{he2021ffb6d} and SS-Conv \cite{lin2021sparse} on the LineMOD dataset in terms of the ADD(-S)-0.1d metric. Our method achieves state-of-the-art performance.
\cite{YuXiang2017PoseCNNAC, peng2019pvnet} are based on RGB images, and the other methods are based on RGB-D images.
G2LNet \cite{WeiChen2020G2LNetGT} first learns the global feature,
and then learns the local feature. SS-Conv \cite{lin2021sparse} learns SE(3)-equivariant features to relate the rotation space and the translation space. The results show that our design of learning SO(3)-equivariant features
outperforms its competitors: on the ADD(-S)-0.1d metric, our model achieves state-of-the-art results on all objects.
\PHR{
\textbf{Performance on the Occlusion LineMOD dataset.} We use the model trained on the LineMOD dataset for testing on the Occlusion LineMOD dataset.
Table \ref{occlm_other} compares our method with PoseCNN \cite{YuXiang2017PoseCNNAC}, Pix2Pose \cite{KiruPark2019Pix2PosePC}, PVNet \cite{peng2019pvnet}, DPOD \cite{SergeyZakharov2019DPOD6P}, Hu et al. \cite{YinlinHu2020SingleStage6O}, HybridPose \cite{ChenSong2020HybridPose6O}, PVN3D \cite{he2020pvn3d} and FFB6D \cite{he2021ffb6d} in terms of the ADD(-S)-0.1d metric.
Our method achieves state-of-the-art performance. Although the results on eggbox and holepuncher are lower than those of PVN3D and FFB6D, the results on the other objects are the best among all methods. Our model exceeds FFB6D by 3.3\%. In particular, our method outperforms the other methods by a margin of 11.4\% on the ADD(-S)-0.1d metric for the cat. The improved performance demonstrates that the proposed method is highly robust to occlusion.
}
As for FFB6D \cite{he2021ffb6d}, we re-train its source code under the same settings suggested by the authors for a fair comparison. We can see that FFB6D performs very well, ranking second.
\subsection{Visualization}
We demonstrate some visualization results from the YCB-Video dataset and the LineMOD dataset in Fig. \ref{vis} and Fig. \ref{lmvis}, respectively.
We find in Fig. \ref{vis} that the ground truth of the object boxed in orange (see the right-bottom image) is not accurate, since the YCB-Video dataset is sampled from a fast-moving video, which produces some motion-blurred frames.
The results show that our method outperforms its competitor, even in occlusion scenes, e.g., the object in the yellow box (see Fig. \ref{vis}, Columns 2 and 4).
\section{Conclusion}
In this paper, we propose a novel method, called SO(3)-Pose, for instance-level 6D object pose estimation.
We verify that leveraging SO(3)-equivariant features to learn the keypoint offset and SO(3)-invariant features to learn the semantic label is beneficial to the pose estimation task. We propose a new loss function, namely SO3Loss, to train the proposed network smoothly. Extensive experiments on standard pose estimation datasets demonstrate the effectiveness of the proposed method and show that it outperforms the state of the art.
However, our method is limited by the simple MLP structure.
In the future, we will explore more powerful structures satisfying the SO(3)-equivariance property, such as transformers, to replace the MLPs.
\bibliographystyle{eg-alpha-doi}
\section{Introduction}
Please follow the steps outlined in this document very carefully when
submitting your manuscript to Eurographics.
You may as well use the \LaTeX\ source as a template to typeset your own
paper. In this case we encourage you to also read the \LaTeX\ comments
embedded in the document.
\section{Instructions}
Please read the following carefully.
\subsection{Language}
All manuscripts must be in English.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts,
must be kept within a print area
7 inches (17.7 cm) wide by
9.44 inches (24 cm) high. Do not write or print anything
outside the print area. Number your pages at the top right on odd
pages and at the top left on even pages; no page number on the first page.
Do not use page numbering in the final version of your paper.
\subsection{Formatting your paper}
All text with the exception of the abstract must be in a two-column format.
The total allowable width of the text area -- including header and footer
lines -- is 177\,mm (7 inch) wide by 245\,mm (9.64 inch) high.
Columns are to be 84\,mm (3.3 inch) wide, with a 8\,mm (0.315 inch) space
between them.
The space between the header line and the first line of the text body and
between the last line of the text body and the footer line is 5\,mm
(0.196 inch) each.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If
neither is available on your word processor, please use the font
closest in appearance to Times that you have access to. Only
Type-1 fonts will be accepted.
MAIN TITLE. The title should be in Times 17-point, boldface type and
centered. Capitalize the first letter of nouns, pronouns, verbs, adjectives,
and adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and
printed in Times 9-point, non-boldface type. This information is to be
followed by two blank lines.
The ABSTRACT is to be in a one-column format. The MAIN TEXT is to be in a
two-column format.
MAIN TEXT. Type main text in 9-point Times, single-spaced. Do \emph{not} use
double-spacing. All paragraphs should be indented 1 em (the length of the
dash in the actual font). Make sure your text is fully justified -- that is,
flush left and flush right. Please do not place any additional blank lines
between paragraphs. Figure and table captions should be 9-point Times
boldface type as in Figure~\ref{fig:firstExample}.
\noindent Long captions should be set as in Figure~\ref{fig:ex1} or
Figure~\ref{fig:ex3}.
\begin{figure}[htb]
\caption{\label{fig:ex1}
'Empty' figure only to serve as an example of long caption requiring
more than one line. It is not typed centered but aligned on both sides.}
\end{figure}
\noindent
Figures which need the full textwidth can be typeset as Figure~\ref{fig:ex3}.
\noindent Callouts should be 9-point Times, non-boldface type. Initially
capitalize only the first word of section titles and first-, second-, and
third-order headings.
FIRST-ORDER HEADINGS. (For example, \textbf{1. Introduction}) should be Times
9-point boldface, initially capitalized, flush left, with one blank line
before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, \textbf{2.1. Language}) should be Times
9-point boldface, initially capitalized, flush left, with one blank line
before, and one after. If you require a third-order heading (we discourage
it), use 9-point Times, boldface, initially capitalized, flush left, preceded
by one blank line, followed by a period and your text on the same line.
The headline \emph{(authors / title)} must be shortened if it uses the full
two column width of the main text.
There must be enough space for the page numbers. Please use ``et al.'' if
there are more than three authors and specify a shortened version for your title.
\subsection{Footnotes}
Please do \emph{not} use footnotes at all!
\subsection{References}
List all bibliographical references in 9-point Times, single-spaced, at the
end of your paper in alphabetical order. When referenced in the text, enclose
the citation index in square brackets, for example~\cite{Lous90}. Where
appropriate, include the name(s) of editors of referenced books.
For your references please use the following algorithm (an illustrative sketch follows the list):
\begin{itemize}
\item \textbf{one} author: first 3 chars plus year --
e.g.\ \cite{Lous90}
\item \textbf{two}, \textbf{three} or \textbf{four} authors: first char
of each family name plus year -- e.g.\ \cite{Fellner-Helmberg93}
or \cite{Kobbelt97-USHDR} or \cite{Lafortune97-NARF}
\item \textbf{more than 4} authors: first char of family name from
first 3 authors followed by a '*' followed by the year --
e.g.\ \cite{Buhmann:1998:DCQ} or \cite{FolDamFeiHug.etal93}
\end{itemize}
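The key construction can be summarized in a few lines of Python. This is only an illustrative sketch of the scheme above, not the actual \texttt{.bst} implementation:
\begin{verbatim}
def eg_citation_key(family_names, year):
    # Illustrative sketch of the alphabetic key scheme;
    # not the actual .bst implementation.
    yy = "%02d" % (year % 100)
    if len(family_names) == 1:
        return family_names[0][:3] + yy            # e.g. Lou90
    if len(family_names) <= 4:                     # e.g. FH93
        return "".join(n[0] for n in family_names) + yy
    # more than 4 authors, e.g. BVZ*98
    return "".join(n[0] for n in family_names[:3]) + "*" + yy
\end{verbatim}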
For BibTeX users the style files \ \texttt{eg-alpha.bst} and
\texttt{eg-alpha-doi.bst} \ are available which implement the above algorithm.
For Biber users a style file \ \texttt{EG.bbx} \ is available which implements the above algorithm.
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered.
\begin{figure}[htb]
\centering
\includegraphics[width=.8\linewidth]{sampleFig}
%
\parbox[t]{.9\columnwidth}{\relax
For all figures please keep in mind that you \textbf{must not}
use images with transparent background!
}
%
\caption{\label{fig:firstExample}
Here is a sample figure.}
\end{figure}
If your paper includes images, it is very important that they are of
sufficient resolution to be faithfully reproduced.
To determine the optimum size (width and height) of an image, measure
the image's size as it appears in your document (in millimeters), and
then multiply those two values by 12. The resulting values are the
optimum $x$ and $y$ resolution, in pixels, of the image. Image quality
will suffer if these guidelines are not followed.
Example 1:
An image measures 50\,mm by 75\,mm when placed in a document. This
image should have a resolution of no less than 600 pixels by 900
pixels in order to be reproduced faithfully.
Example 2:
Capturing a screenshot of your entire $1024 \times 768$ pixel display
monitor may be useful in illustrating a concept from your research. In
order to be reproduced faithfully, that $1024 \times 768$ image should
be no larger than 85 mm by 64 mm (approximately) when placed in your
document.
\subsection{Color}
\textbf{Please observe:} as of 2003 publications in the proceedings of the
Eurographics Conference can use color images throughout the paper. No
separate color tables are necessary.
However, workshop proceedings might have different agreements!
Figure~\ref{fig:ex3} is an example for creating color plates.
\subsection{Embedding of Hyperlinks / Typesetting of URLs}
Due to the use of the package \texttt{hyperref} the original behavior
of the command $\backslash$\texttt{url} from the package \texttt{url}
is not available. To circumvent this problem we recommend either
using the command $\backslash$\texttt{httpAddr} from the
included package \texttt{egweblnk} (see below) or replacing the
command $\backslash$\texttt{url} by the command $\backslash$\texttt{webLink}
-- e.g. in cases where $\backslash$\texttt{url} has been used
widely in BibTeX references. In the latter case we suggest running
BibTeX as usual and then replacing all occurrences of $\backslash$\texttt{url} by
$\backslash$\texttt{webLink}.
\noindent
The provided commands for hyperlinks, in a nutshell, are:
\begin{description} \itemsep 1ex
\item [\webLinkFont $\backslash$httpAddr \{URL without leading 'http:'\}]
\mbox{}\\
e.g. \ \httpAddr{//diglib.eg.org/handle/10.2312/306}
\item [\webLinkFont $\backslash$httpsAddr \{URL without leading 'https:'\}]
\mbox{}\\
e.g. \ \httpsAddr{//diglib.eg.org/handle/10.2312/306}
\item[\webLinkFont $\backslash$ftpAddr \{URL without leading 'ftp:'\}]
\mbox{}\\
e.g. \ \ftpAddr{//www.eg.org/EG/DL/ftpupload} %
\item[\webLinkFont $\backslash$URL \{url\}]
\mbox{}\\
e.g. \ \URL{http://diglib.eg.org/handle/10.2312/306}
\item[\webLinkFont $\backslash$MailTo \{Email addr\}]
\mbox{}\\
e.g. \ \MailTo{publishing@eg.org}
\item[\webLinkFont $\backslash$MailToNA \{emailName\}\{@emailSiteAddress\}]
\mbox{}\\
e.g. \ \MailToNA{publishing}{@eg.org}
\item[\webLinkFont $\backslash$webLink\{URL without hyperlink creation\}]
\mbox{}\\
e.g. \ \webLink{http://www.eg.org/some_arbitrary_long/but_useless/URL}
\end{description}
\subsection{PDF Generation}
Your final paper should be delivered as a PDF document with all typefaces
embedded. \LaTeX{} users should use \texttt{dvips} and \texttt{ps2pdf} to
create this PDF document. Adobe Acrobat Distiller may be used in place of
\texttt{ps2pdf}.
Adobe PDFWriter is \emph{not} acceptable for use. Documents created with
PDFWriter will be returned to the author for revision. \texttt{pdftex} and
\texttt{pdflatex} (and its variants) can be used only if the author can
make certain that all typefaces are embedded and images are not downsampled
or subsampled during the PDF creation process.
Users with no access to these PDF creation tools should make available a
PostScript file and we will make a PDF document from it.
The PDF file \emph{must not} be change protected.
\subsubsection*{Configuration Notes: dvips / ps2pdf / etc.}
\noindent
\texttt{dvips} should be invoked with the \texttt{-Ppdf} and \texttt{-G0}
flags in order to use Type 1 PostScript typefaces:
\begin{verbatim}
dvips -t a4 -Ppdf -G0 -o my.ps my.dvi
\end{verbatim}
\noindent
If you are using version 7.x of GhostScript, please use the following method of invoking \texttt{ps2pdf}, in
order to embed all typefaces and ensure that images are not downsampled or subsampled in the PDF
creation process:
\begin{verbatim}
ps2pdf -dMaxSubsetPct=100 \
-dCompatibilityLevel=1.3 \
-dSubsetFonts=true \
-dEmbedAllFonts=true \
-dAutoFilterColorImages=false \
-dAutoFilterGrayImages=false \
-dColorImageFilter=/FlateEncode \
-dGrayImageFilter=/FlateEncode \
-dMonoImageFilter=/FlateEncode \
mypaper.ps mypaper.pdf
\end{verbatim}
If you are using version 8.x of GhostScript, please use this method in place of the example above:
\begin{verbatim}
ps2pdf -dPDFSETTINGS=/prepress \
-dCompatibilityLevel=1.3 \
-dAutoFilterColorImages=false \
-dAutoFilterGrayImages=false \
-dColorImageFilter=/FlateEncode \
-dGrayImageFilter=/FlateEncode \
-dMonoImageFilter=/FlateEncode \
-dDownsampleColorImages=false \
-dDownsampleGrayImages=false \
mypaper.ps mypaper.pdf
\end{verbatim}
\subsubsection*{Configuration Notes: pdftex / pdflatex / etc.}
\noindent
Configuration of these tools to embed all typefaces can be accomplished by editing the \texttt{updmap.cfg} file
to enable inclusion of the standard (or base) 14 typefaces.
Linux users can run the \texttt{updmap} script to do this:
\begin{verbatim}
updmap --setoption pdftexDownloadBase14 true
\end{verbatim}
Windows users should edit the \texttt{updmap.cfg} files found in their TeX installation directories (one or both
of the following may be present):
\begin{verbatim}
INSTALLDIR\texmf\web2c\updmap.cfg
INSTALLDIR\localtexmf\miktex\config\updmap.cfg
\end{verbatim}
Ensure the value for \texttt{pdftexDownloadBase14} is "true," and then follow the instructions found here:
\httpAddr{//docs.miktex.org/manual/} to update your MikTeX installation.
\subsubsection*{Configuration Notes: Acrobat Distiller}
We recommend using a Distiller job options file that embeds
all typefaces and does not downsample or subsample images when creating the PDF document.
\subsection{Exclusive License Form}
You must include your signed Eurographics Exclusive License Form
when you submit your finished paper. We MUST have this form before
your paper can be published in the proceedings.
\subsection{Conclusions}
Please direct any questions to the production editor in charge of
these proceedings.
\printbibliography
\newpage
\begin{figure*}[tbp]
\centering
\mbox{} \hfill
\includegraphics[width=.3\linewidth]{sampleFig}
\hfill
\includegraphics[width=.3\linewidth]{sampleFig}
\hfill \mbox{}
\caption{\label{fig:ex3}%
For publications with color tables (i.e., publications not offering
color throughout the paper) please \textbf{observe}:
for the printed version -- and ONLY for the printed
version -- color figures have to be placed in the last page.
\newline
For the electronic version, which will be converted to PDF before
making it available electronically, the color images should be
embedded within the document. Optionally, other multimedia
material may be attached to the electronic version. }
\end{figure*}
\end{document}
\section{Signed Distance Field Computation}
\label{sect:perc:sdf_appendix}
This section details how a signed distance field can be computed for a 2.5D elevation map. Consider the following general definition for the squared Euclidean distance between a point in space and the closest obstacle:
\begin{equation}
\begin{split}
\mathcal{D}(x, y, z) = \min_{x', y', z'} \left[ \left( x - x' \right)^2 + \left( y - y' \right)^2 + \left( z - z' \right)^2 \right. \\ \left. + \, I(x', y', z')^{\phantom{2}} \right],
\end{split}
\label{eq:perc:distanceDefinition}
\end{equation}
where $I(x', y', z')$ is an indicator function returning $0$ for an obstacle and $\infty$ for empty cells.
As described in \cite{felzenszwalb2012distance}, a full 3D distance transform can be computed by consecutive distance transforms in each dimension of the grid, in arbitrary order. For the elevation map, the distance along the z-direction is trivial. Therefore, starting the algorithm with the z-direction simplifies the computation. First, \eqref{eq:perc:distanceDefinition} can be rewritten as follows:
\begin{align}
\mathcal{D}(x, y, z) &= \min_{x', y'} \left[ \left( x - x' \right)^2 + \left( y - y' \right)^2 + \right. \\ &\qquad\qquad \left. \min_{z'} \left[ \left( z - z' \right)^2 + I(x', y', z') \right]\right], \notag \\
&= \min_{x', y'} \left[ \left( x - x' \right)^2 + \left( y - y' \right)^2 + f_z(x', y', z) \right],
\label{eq:perc:2dSignedDistance}
\end{align}
where $f_z(x', y', z)$ is a function that returns, for each horizontal position, the one-dimensional distance transform in the z-direction. For an elevation map, this function has the following closed-form solution at a given height $z$:
\begin{equation}
f_z(x', y', z) = \begin{cases}
(z - h(x',y'))^2 & \text{if } z \geq h(x',y'), \\
0 & \text{otherwise},
\end{cases}
\end{equation}
where $h(x',y')$ denotes the elevation map evaluated at $(x', y')$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figures/pixelDistance.pdf}
\caption{1D example illustrating the effect of the distance metric on the SDF. When taking the Euclidean distance between cell centers as in (a), the SDF has a discontinuous gradient across the obstacle border. Taking the distance between a cell center and the border of an occupied / free cell, as in (b), avoids this issue.}
\label{fig:perc:pixel_distance}
\end{figure}
The same idea can be used to compute the distance to obstacle-free space and obtain the negative-valued part of the SDF. Adding both distances together provides the full SDF, and gradients are computed by finite differences between layers, columns, and rows. However, naively taking the Euclidean distance between cell centers in the minimization of \eqref{eq:perc:2dSignedDistance} leads to incorrect values around obstacle borders, as illustrated in Fig.~\ref{fig:perc:pixel_distance}. We need to account for the fact that the obstacle border is located between cells, not at the cell locations themselves. This can be resolved by adapting \eqref{eq:perc:2dSignedDistance} to account for the discrete nature of the problem:
\begin{equation}
\mathcal{D}(x, y, z) = \min_{ \{x', y' \} \in \mathcal{M}} \left[ d \left( x, x'\right) + d \left( y, y' \right) + f_z(x', y', z) \right],
\label{eq:perc:2dSignedDistanceDiscrete}
\end{equation}
where $\{x', y'\} \in \mathcal{M}$ now explicitly shows that we only minimize over the discrete cells contained in the map, and $d(\cdot, \cdot)$ is a function that returns the squared distance between the center of one cell and the border of another:
\begin{equation}
d(x, x') = \begin{cases}
\left(|x - x'| - 0.5 r\right)^2 & \text{if } x \neq x', \\
0 & \text{otherwise},
\end{cases}
\end{equation}
where $r$ is the resolution of the map. The distance transforms can now be computed based on \eqref{eq:perc:2dSignedDistanceDiscrete}, for each height in parallel, with the 2D version of the algorithm described in \cite{felzenszwalb2012distance}.
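For concreteness, the following Python sketch evaluates \eqref{eq:perc:2dSignedDistanceDiscrete} for a single height by brute force. It is a quadratic-time reference implementation for checking correctness only; the actual pipeline uses the linear-time lower-envelope algorithm of \cite{felzenszwalb2012distance}:
\begin{verbatim}
import numpy as np

def f_z(h, z):
    # Closed-form 1D transform along z: squared distance
    # to the terrain surface, zero at or below the surface.
    d = z - h
    return np.where(d > 0.0, d * d, 0.0)

def d_border(i, j, r):
    # Squared distance between the center of cell i and
    # the *border* of cell j along one axis (resolution r).
    return 0.0 if i == j else (abs(i - j) * r - 0.5 * r) ** 2

def sdf_slice_bruteforce(h, z, r):
    # O(n^2) evaluation of the discrete 2D transform at height z.
    nx, ny = h.shape
    fz = f_z(h, z)
    D = np.full((nx, ny), np.inf)
    for x in range(nx):
        for y in range(ny):
            for xp in range(nx):
                for yp in range(ny):
                    c = (d_border(x, xp, r)
                         + d_border(y, yp, r) + fz[xp, yp])
                    D[x, y] = min(D[x, y], c)
    return np.sqrt(D)  # positive part of the SDF at height z
\end{verbatim}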
\section{Introduction}
\FloatBarrier
\IEEEPARstart{I}{nspired} by nature, the field of legged robotics aims to enable the deployment of autonomous systems in rough and complex environments. Indeed, during the recent DARPA subterranean challenge, legged robots were widely adopted, and highly successful \cite{tranzatto2021cerberus,bouman2020autonomous}. Still, complex terrains that require precise foot placements, e.g., negative obstacles and stepping stones as shown in Fig.~\ref{fig:perc:ANYmal}, remain difficult.
A key challenge lies in the fact that both the terrain and the system dynamics impose constraints on contact location, force, and timing. When taking a model-based approach, mature methods exist for perceptive locomotion with a slow, static gait~\cite{kalakrishnan2010fast,belter2016adaptive,mastalli2020motion,fankhauser2018robust,griffin2019footstep} and for blind, dynamic locomotion that assumes flat terrain~\cite{bellicoso2018dynamic,bledt2017policy,di2018dynamic}. Learning-based controllers have recently shown the ability to generalize blind locomotion to challenging terrain with incredible robustness~\cite{lee2020learning,siekmann2021blind,miki2022learning}. Still, tightly integrating perception to achieve coordinated and precise foot placement remains an active research problem.
In an effort to extend dynamic locomotion to uneven terrain, several methods have been proposed to augment foothold selection algorithms with perceptive information~\cite{jenelten2020perceptive,kim2020vision,villarreal2020mpc}. These approaches build on a strict hierarchy of first selecting footholds and optimizing torso motion afterward. This decomposition reduces the computational complexity but relies on hand-crafted coordination between the two modules. Additionally, separating the legs from the torso optimization makes it difficult to consider kinematic limits and collision avoidance between limbs and terrain.
Trajectory optimization \new{where torso and leg motions are jointly optimized} has shown impressive results in simulation~\cite{mordatch2012discovery,winkler2018gait,dai2014wholebody} and removes the need for engineered torso-foot coordination. Complex motions can be automatically discovered by including the entire terrain in the optimization. However, computation times are often too long for online deployment. Additionally, due to the non-convexity, non-linearity, and discontinuity introduced by optimizing over arbitrary terrain, these methods can get stuck in poor local minima. Dedicated work on providing an initial guess is needed to find feasible motions reliably~\cite{melon2020reliable}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,
trim={75 205 0 0},clip]{figures/cover.jpg}
\caption{ANYmal walking on uneven stepping stones. In the shown configuration, the top foothold is \SI{60}{\centi\meter} above the lowest foothold. The top right visualizes the internal terrain representation used by the controller.}
\label{fig:perc:ANYmal}
\end{figure}
This work presents a planning and control framework that optimizes over all degrees of freedom of the robot, considers collision avoidance with the terrain, and enables complex dynamic maneuvers in rough terrain. The method is centered around nonlinear Model Predictive Control (MPC) with a multiple-shooting discretization~\cite{bock1984multiple,rawlings2017model}. However, in contrast to the aforementioned work, where the full terrain is integrated into the optimization, we get a handle on the numerical difficulty introduced by the terrain by exposing the terrain as a series of geometric primitives that approximate the local terrain. In this case, we use convex polygons as foot placement constraints, but different shapes can be used as long as they lead to well-posed constraints in the optimization. Additionally, a signed distance field (SDF) is used for collision avoidance. We empirically demonstrate that such a strategy is an excellent trade-off between giving freedom to the optimization to discover complex motions and the reliability with which we can solve the formulated problem.
\subsection{Contributions}
We present a novel approach to locomotion in challenging terrain where perceptive information needs to be considered and nontrivial motions are required. The complete perception, planning, and control pipeline contains the following contributions:
\begin{itemize}
\item \new{We perform simultaneous and real-time optimization of all degrees of freedom of the robot for dynamic motions across rough terrain. Perceptive information is encoded through a sequence of geometric primitives that capture local foothold constraints and a signed distance field used for collision avoidance.
}
\item \new{The proposed combination of a multiple-shooting transcription, sequential quadratic programming, and a filter-based line-search enables fast and reliable online solutions to the nonlinear optimal control problem.}
\item \new{We provide a detailed description of the implemented MPC problem, its integration with whole-body and reactive control modules, and extensive experimental validation of the resulting locomotion controller.}
\end{itemize}
\new{The MPC implementation is publicly available as part of the OCS2 toolbox\footnote{\href{https://github.com/leggedrobotics/ocs2/tree/main/ocs2\_sqp}{https://github.com/leggedrobotics/ocs2}} \cite{OCS2}. The implemented online segmentation of the elevation map, and the efficient precomputation of a signed distance field are contributed to existing open-source repositories%
\footnote{\new{\href{https://github.com/leggedrobotics/elevation\_mapping\_cupy/tree/main/plane\_segmentation}{https://github.com/leggedrobotics/elevation\_mapping\_cupy}}}%
\footnote{\new{\href{https://github.com/ANYbotics/grid\_map/tree/master/grid\_map\_sdf}{https://github.com/ANYbotics/grid\_map}}}.}
\subsection{Outline}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\columnwidth]{figures/method_overview.pdf}
\caption{Schematic overview of the proposed method together with the update rate of each component.}
\label{fig:perc:method_overview}
\end{figure}
An overview of the proposed method is given in Fig.~\ref{fig:perc:method_overview}. The perception pipeline at the top of the diagram runs at \SI{20}{\hertz} and is based on an elevation map constructed from pointcloud information. For each map update, classification, segmentation, and other precomputation are performed to prepare for the high number of perceptive queries during motion optimization. At the core of the framework, we use nonlinear MPC at \SI{100}{\hertz} to plan a motion for all degrees of freedom and bring together user input, perceptive information, and the measured state of the robot. Finally, state estimation, whole-body torque control, and reactive behaviors are executed at a rate of \SI{400}{\hertz}.
After a review of related work in section~\ref{sect:perc:related_work}, this paper is structured similarly to Fig.~\ref{fig:perc:method_overview}. First, we present the perception pipeline in section \ref{sect:perc:perception}. Afterward, the formulated optimal control problem and corresponding numerical optimization strategy are discussed in sections~\ref{sect:perc:motion_optimization}~{\&}~\ref{sect:perc:numerical_optimization}. We introduce the motion execution layer in section \ref{sect:perc:motion_execution}. The resulting method is evaluated on the quadrupedal robot ANYmal~\cite{hutter2016anymal} (see Fig.~\ref{fig:perc:ANYmal}) in section~\ref{sect:perc:results}, and conclusions are drawn in section~\ref{sect:perc:conclusion}.
\section{Related Work}
\label{sect:perc:related_work}
\subsection{Decomposing locomotion}
When assuming a quasi-static gait with a predetermined stepping sequence, the planning problem on rough terrain can be simplified and decomposed into individual contact transitions, as demonstrated in the early work on \textit{LittleDog} \cite{kolter2008control,kalakrishnan2010fast}. In a one-step-ahead fashion, one can check the next foothold for kinematic feasibility, feasibility w.r.t. the terrain, and the existence of a statically stable transition. This problem can be efficiently solved by sampling and checking candidate footholds \cite{tonneau2018efficient}. Afterward, a collision-free swing leg trajectory to the desired foothold can be generated with CHOMP \cite{zucker2013chomp} based on an SDF. Fully onboard perception and control with such an approach were achieved by Fankhauser et al.~\cite{fankhauser2018robust}. Instead of one-step-ahead planning, an RRT graph can be built to plan further ahead \cite{belter2016adaptive}. Sampling over templated foothold transitions achieves similar results~\cite{mastalli2015online,mastalli2020motion}.
In this work, we turn our attention to dynamic gaits, where statically stable transitions between contact configurations are not available. In model-based approaches to dynamic, perceptive locomotion, a distinction can be made between methods where the foothold locations are determined separately from the torso and those where the foothold locations and torso motions are jointly optimized.
Several methods in which footholds are selected before optimizing the torso motions, initially designed for flat terrain, have been adapted to traverse rough terrain~\cite{bajracharya2013high,bazeille2014quadruped}. These methods typically employ some form of Raibert heuristic \cite{raibert1986legged} to select the next foothold and adapt it based on perceptive information such as a traversability estimate~\cite{wermelinger2016navigation}. The work of Bellicoso et al.~\cite{bellicoso2018dynamic} was extended by including a batch search for feasible footholds based on a given terrain map and foothold scoring \cite{jenelten2020perceptive}. Similarly, in~\cite{kim2020vision}, the foot placement is adapted based on visual information resulting in dynamic trotting and jumping motions. In~\cite{magana2019fast}, the authors proposed to train a convolutional neural network (CNN) to speed up the online evaluation of such a foothold adaptation pipeline. This CNN was combined with the MPC strategy in~\cite{di2018dynamic} to achieve perceptive locomotion in simulation~\cite{villarreal2020mpc}. In~\cite{gangapurwala2022rloc}~and~\cite{yu2021visuallocomotion}, a Reinforcement Learning (RL) policy has replaced the heuristic foothold selection.
However, since foothold locations are chosen before optimizing the torso motion, their effect on dynamic stability and kinematic feasibility is not directly considered, requiring additional heuristics to coordinate feet and torso motions to satisfy whole-body kinematics and dynamics. Moreover, it becomes hard to consider collisions of the leg with the terrain because the foothold is already fixed. In our approach, we use the same heuristics to find a suitable nominal foothold in the terrain. However, instead of fixing the foothold to that particular location, a region is extracted around the heuristic in which the foothold is allowed to be optimized.
The benefit of jointly optimizing torso and leg motions has been demonstrated in the field of trajectory optimization. One of the first demonstrations of simultaneous optimization of foot placement and a zero-moment point (ZMP) \cite{vukobratovic2004zeromoment} trajectory was achieved by adding 2D foot locations as decision variables to an MPC algorithm~\cite{herdt2010online}. More recently, Kinodynamic \cite{farshidian2017efficient}, Centroidal~\cite{orin2013centroidal,sleiman2021unified}, and full dynamics models \cite{pardo2017hybrid,herzog2016structured} have been used for simultaneous optimization of 3D foot locations and body motion. Alternatively, a single rigid body dynamics (SRBD) model \new{or other simplified torso models} can be extended with decision variables for Cartesian foothold locations~\cite{winkler2018gait}\new{,\cite{jenelten2021TAMOLS}}. Real-time capable methods have been proposed with the specification of leg motions on position~\cite{bledt2017policy}, velocity~\cite{farshidian2017realtime}, or acceleration level~\cite{neunert2018wholebody}. One challenge of this line of work is the computational complexity arising from the high-dimensional models, even for locomotion on flat terrain. Our method also uses a high-dimensional model and falls into this category. A key consideration when extending the formulation with perceptive information has thus been to keep computation within real-time constraints.
Finally, several methods exist that additionally optimize gait timings or even the contact sequence together with the whole-body motion. This can be achieved through complementarity constraints~\cite{mordatch2012discovery,posa2014direct,dai2014wholebody}, mixed-integer programming\cite{aceituno2018simultaneous,marcucci2017approximate}, or by explicitly integrating contact models into the optimization \cite{neunert2018wholebody,carius2018trajectory}. Alternatively, the duration of each contact phase can be included as a decision variable~\cite{ponton2018ontime,winkler2018gait} or found through bilevel optimization~\cite{farshidian2017sequential,seyde2019locomotion}. \new{
However, such methods are prone to poor local optima and reliably solving the optimization problems in real-time remains challenging.}
\subsection{Terrain representation}
The use of an elevation map has a long-standing history in the field of legged robotics~\cite{herbert1989terrain}, and it is still an integral part of many perceptive locomotion controllers today. Approaches where footholds are selected based on a local search or sampling-based algorithm can directly operate on such a structure. However, more work is needed when integrating the terrain into a gradient-based optimization.
Winkler et al.~\cite{winkler2018gait} use an elevation map for both foot placement and collision avoidance. The splines representing the foot motion are constrained to start and end on the terrain with equality constraints. An inequality constraint is used to avoid the terrain in the middle of the swing phase. Ignoring the discontinuity and non-convexity of the terrain makes this approach prone to poor local minima, motivating specialized initialization schemes~\cite{melon2020reliable} for this framework.
In~\cite{jenelten2021TAMOLS}, a graduated optimization scheme is used, where a first optimization is carried out over a smoothed version of the terrain. The solution of this first optimization is then used to initialize an optimization over the actual elevation map. In a similar spirit, Mordatch \cite{mordatch2012discovery} considers a general 3D environment and uses a soft-min operator to smooth the closest-point computation. A continuation scheme is used to gradually increase the difficulty of the problem over consecutive optimizations.
Deits et al.~\cite{deits2014footstep} describe a planning approach over rough terrain based on mixed-integer quadratic programming (MIQP). Similar to \cite{griffin2019footstep}, convex safe regions are extracted from the terrain, and footstep assignment to a region is formulated as a discrete decision. The foothold optimization is simplified because only convex, safe regions are considered during planning. \new{Furthermore, the implementation relied on manual seeding of convex regions by a human operator}. We follow the same philosophy of presenting the terrain as a convex region to the optimization. However, we remove the mixed-integer aspect by pre-selecting the convex region. The benefits are two-fold: First, we do not require a global convex decomposition of the terrain, which is a hard problem in general \cite{bertrand2020detecting}, and instead, only produce a local convex region centered around a nominal foothold. Second, the MIQP approach does not allow for nonlinear costs and dynamics, which limits the range of motions that can be expressed. We first explored the proposed terrain representation as part of our previous work~\cite{grandia2021multi}, but relied on offline mapping, manual terrain segmentation, and did not yet consider terrain collisions. \new{In~\cite{bjelonic2022offline}, we applied this idea to wheeled-legged robots, but again relied on offline mapping and segmentation. Moreover, as discussed in the next section, in both \cite{grandia2021multi} and \cite{bjelonic2022offline}, we used a different solver, which was found to be insufficient for the scenarios in this work.}
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figures/perceptionPipeline}
\caption{Perception pipeline overview. (\textbf{A}) The elevation map is filtered and classified into steppable and non-steppable cells \new{[Section~\ref{sect:perc:filtering_and_classification}]}. All steppable areas are segmented into planes \new{[Section~\ref{sect:perc:plane_segmentation}]}. After segmentation, the steppability classification is refined. (\textbf{B}) A signed distance field \new{[Section~\ref{sect:perc:signed_distance_field}]} and torso reference layer \new{[Section~\ref{sect:perc:torso_reference_layer}]} are precomputed to reduce the required computation time during optimization. \textbf{(C)} Convex foothold constraints in \new{\eqref{eq:perc:foothold_position_constraint}} are obtained from the plane segmentation. The signed distance field enables collision avoidance \new{in \eqref{eq:perc:sdf_inequality}}, and the torso reference is used to generate height and orientation references \new{[Section~\ref{sect:perc:reference_generation}]}.}
\label{fig:perc:perception_overview}
\end{figure*}
\subsection{Motion Optimization}
For trajectory optimization, large-scale optimization software like SNOPT~\cite{gill2005snopt} and IPOPT~\cite{wachter2006implementation} are popular. They are the workhorse for offline trajectory optimization in the work of Winkler \cite{winkler2018gait}, Dai \cite{dai2014wholebody}, Mordatch \cite{mordatch2012discovery}, Posa \cite{posa2014direct}, and Pardo \cite{pardo2017hybrid}. These works show a great range of motions in simulation, but it typically takes minutes to hours to find a solution.
A different line of work uses specialized solvers that exploit the sparsity that arises from a sequential decision making process. Several variants of Differential Dynamic Programming (DDP)~\cite{jacobson1970differential} have been proposed in the context of robotic motion optimization, e.g., iLQR~\cite{tassa2012synthesis,howell2019ALTRO}, SLQ~\cite{farshidian2017efficient}, and FDDP~\cite{mastalli2020crocoddyl}.
With a slightly different view on the problem, the field of (nonlinear) model predictive control~\cite{mayne2014model,rawlings2017model} has specialized in solving successive optimal control problems under real-time constraints. See \cite{kouzoupis2018recent} for a comparison of state-of-the-art quadratic programming (QP) solvers that form the core of second-order optimization approaches to the nonlinear problem. For time-critical applications, the real-time iteration scheme can be used to trade optimality for lower computational demands~\cite{diehl2005realtime}: In a Sequential Quadratic Programming (SQP) approach to the nonlinear problem, at each control instance, only a single QP optimization step is performed.
The current work was initially built on top of a solver in the first category~\cite{farshidian2017efficient}. However, a significant risk in classical DDP-based approaches is the need to perform a nonlinear system rollout along the entire horizon. Despite the use of a feedback policy, these forward rollouts can diverge, especially in the presence of underactuated dynamics. This same observation motivated Mastalli et al.\ to design FDDP to maintain \textit{gaps} between shorter rollouts, resulting in a formulation that is equivalent to direct multiple-shooting formulations with only equality constraints~\cite{mastalli2020crocoddyl,bock1984multiple}. \new{Giftthaler et al.\ \cite{giftthaler2018family} studied several combinations of iLQR and multiple-shooting but did not yet consider constraints beyond system dynamics nor a line-search procedure to determine the stepsize. Furthermore, experiments were limited to simple, flat terrain walking.}
We directly follow the multiple-shooting approach with a real-time iteration scheme and leverage the efficient structure exploiting QP solver HPIPM~\cite{frison2020hpipm}. However, as also mentioned in \new{both}~\cite{mastalli2020crocoddyl} \new{and \cite{giftthaler2018family}}, one difficulty is posed in deciding a stepsize for nonlinear problems, where one now has to monitor both the violation of the system dynamics and minimization of the cost function. To prevent an arbitrary trade-off through a merit function, we suggest using a filter-based line-search instead~\cite{fletcher2002nonlinear}, which allows a step to be accepted if it reduces either the objective function or the constraint violation. \new{As we will demonstrate in the result section, these choices contribute to the robustness of the solver in challenging scenarios.}
\section{Terrain perception and segmentation}
\label{sect:perc:perception}
An overview of the perception pipeline and its relation to the MPC controller is provided in Fig.~\ref{fig:perc:perception_overview}. The pipeline can be divided into three parts: (\textbf{A}) steppability classification and segmentation, (\textbf{B}) precomputation of the SDF and torso reference, and (\textbf{C}) integration into the optimal control problem.
The elevation map, represented as a 2.5D grid \cite{fankhauser2016universal} with a \SI{4}{\centi\meter} resolution, is provided by the GPU-based implementation introduced in \cite{miki2022elevation}. The subsequent map processing presented in this work runs on the CPU and is made available as part of that same open-source library. Both (\textbf{A}) and (\textbf{B}) are computed once per map and run \new{at \SI{20}{\hertz},} asynchronously to the motion optimization in (\textbf{C}).
\subsection{Filtering \& Classification}\label{sect:perc:filtering_and_classification}
The provided elevation map contains empty cells in occluded areas. As a first step, we perform \textit{inpainting} by filling each cell with the minimum value found along the occlusion border. Afterwards, a median filter is used to reduce noise and outliers in the map.
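A minimal Python sketch of this preprocessing step is given below, assuming occluded cells are marked as NaN. The iterative neighbourhood fill shown here is one way to realize the border-minimum inpainting and may differ from the actual implementation:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def inpaint_and_filter(elevation, median_size=3, max_iter=100):
    # Fill occluded (NaN) cells from the minimum valid value in
    # their 3x3 neighbourhood, iterating so that the minimum on
    # the occlusion border propagates inward; then median-filter.
    filled = elevation.copy()
    for _ in range(max_iter):
        mask = np.isnan(filled)
        if not mask.any():
            break
        neigh_min = ndimage.generic_filter(
            filled, np.nanmin, size=3, mode="nearest")
        fillable = mask & ~np.isnan(neigh_min)
        filled[fillable] = neigh_min[fillable]
    return ndimage.median_filter(filled, size=median_size)
\end{verbatim}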
Steppability classification is performed by thresholding the local surface inclination and the local roughness estimated through the standard deviation~\cite{chilian2009stereo}. Both quantities can be computed with a single pass through the considered neighbourhood of size $N$:
\begin{equation}
\mbox{\boldmath $\mu$} = \frac{1}{N} \sum_i \mathbf c_i, \quad \mathbf S = \frac{1}{N} \sum_i \mathbf c_i \mathbf c^\top_i, \quad \mathbf \Sigma = \mathbf S - \mbox{\boldmath $\mu$} \mbox{\boldmath $\mu$}^\top,
\label{eq:perc:covariance}
\end{equation}
where $\mbox{\boldmath $\mu$}$ and $\mathbf S$ are the first and second moment, and $\mathbf \Sigma \in \mathbb{R}^{3\times3}$ is the positive semi-definite covariance matrix of the cell positions $\mathbf c_i$. The variance in normal direction, $\sigma_n^2$, is then the smallest eigenvalue of $\mathbf \Sigma$, and the surface normal, $\mathbf n$, is the corresponding eigenvector. For steppability classification we use a neighbourhood of $N=9$, and set a threshold of \SI{2}{\centi\meter} on the standard deviation in normal direction and a maximum inclination of \SI{35}{\degree}, resulting in the following classification:
\begin{equation}
\text{steppability} = \begin{cases}
1 & \text{if } \sigma_n \leq 0.02, \text{ and } n_z \geq 0.82, \\
0 & \text{otherwise},
\end{cases}
\label{eq:perc:steppability}
\end{equation}
where $n_z$ denotes the z-coordinate of the surface normal.
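The classification of a single cell can be written compactly; the following Python sketch mirrors \eqref{eq:perc:covariance} and \eqref{eq:perc:steppability} (note that $0.82 \approx \cos(35^\circ)$):
\begin{verbatim}
import numpy as np

def is_steppable(cells):
    # cells: N x 3 array of neighbouring cell positions (N = 9).
    mu = cells.mean(axis=0)                  # first moment
    S = cells.T @ cells / len(cells)         # second moment
    Sigma = S - np.outer(mu, mu)             # covariance
    eigvals, eigvecs = np.linalg.eigh(Sigma) # ascending order
    sigma_n = np.sqrt(max(eigvals[0], 0.0))  # std. dev. along normal
    n_z = abs(eigvecs[2, 0])                 # upward normal component
    return sigma_n <= 0.02 and n_z >= 0.82   # 0.82 ~ cos(35 deg)
\end{verbatim}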
\subsection{Plane Segmentation}
\label{sect:perc:plane_segmentation}
After the initial classification, the plane segmentation starts by identifying continuous regions with the help of connected-component labelling \cite{wu2009optimizing}. For each connected region of cells, we again compute the covariance as in \eqref{eq:perc:covariance}, where $N$ is now the number of cells in the connected region, and accept the region as a plane based on the following criteria:
\begin{equation}
\text{planarity} = \begin{cases}
1 & \text{if } \sigma_n \leq 0.025, n_z \geq 0.87, \text{ and } N \geq 4 \\
0 & \text{otherwise}.
\end{cases}
\label{eq:perc:planarity}
\end{equation}
Notice that here we loosen the bound on the standard deviation to \SI{2.5}{\centi\meter}, tighten the bound on the inclination to \SI{30}{\degree}, and add the constraint that at least 4 cells form a region.
If the planarity condition is met, the surface normal and mean of the points define the plane.
If a region fails the planarity condition, we trigger RANSAC \cite{schnabel2007efficient} on that subset of the data. The same criteria in \eqref{eq:perc:planarity} are used to find smaller planes within the connected region. After the algorithm terminates, all cells that have not been included in any plane have their steppability updated and set to 0.
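A sketch of this segmentation loop, using \texttt{scipy}'s connected-component labelling as a stand-in for \cite{wu2009optimizing} and omitting the RANSAC fallback, could look as follows:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def planarity(cells):
    # Eq. (planarity): same covariance construction as for
    # steppability, with the looser / tighter thresholds.
    if len(cells) < 4:
        return False
    mu = cells.mean(axis=0)
    Sigma = cells.T @ cells / len(cells) - np.outer(mu, mu)
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    sigma_n = np.sqrt(max(eigvals[0], 0.0))
    n_z = abs(eigvecs[2, 0])
    return sigma_n <= 0.025 and n_z >= 0.87

def segment_planes(steppable, cell_positions):
    # steppable: HxW boolean map; cell_positions: HxWx3 array.
    labels, num = ndimage.label(steppable)
    planes, failed = [], []
    for k in range(1, num + 1):
        cells = cell_positions[labels == k]
        (planes if planarity(cells) else failed).append(cells)
    # `failed` regions would be handed to RANSAC here.
    return planes, failed
\end{verbatim}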
At this point, we have a set of plane parameters with connected regions of the map assigned to them. For each of these regions, we now extract a 2D contour from the elevation map \cite{suzuki1985topological}, and project it along the z-axis to the plane to define the boundary in the frame of the plane. It is important to consider that regions can have holes, \new{for example, when a free-standing obstacle is located in the middle of an open floor. The boundary of each segmented region is therefore represented by} an outer polygon together with a set of polygons that trace enclosed holes. \new{See Fig.~\ref{fig:perc:polygon_examples} for an illustrative example of such a segmented region and the local convex approximations it permits.} Finally, if the particular region allows, we shrink the boundary inwards (and holes outwards) to provide a safety margin. If the inscribed area is not large enough, the plane boundary is accepted without margin. In this way we obtain a margin where there is enough space to do so, but at the same time we do not reject small stepping stones, which might be crucial in certain scenarios.
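The margin step can be sketched with \texttt{shapely}, where a single negative buffer simultaneously shrinks the outer boundary and grows the holes; the \SI{2}{\centi\meter} margin below is an assumed value for illustration:
\begin{verbatim}
from shapely.geometry import Polygon

def apply_safety_margin(outer, holes, margin=0.02):
    # margin = 0.02 m is an assumed value, not from the paper.
    region = Polygon(outer, holes)
    shrunk = region.buffer(-margin)
    if shrunk.is_empty:
        return region   # e.g. a small stepping stone: keep it
    return shrunk
\end{verbatim}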
\begin{figure}[!tb]
\centering
\begin{minipage}{.32\columnwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/polygon_3.pdf}
\end{minipage}
\begin{minipage}{.32\columnwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/polygon_2.pdf}
\end{minipage}
\begin{minipage}{.32\columnwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/polygon_1.pdf}
\end{minipage}
\caption{\new{An example of a segmented region represented by a non-convex outer polygon and two non-overlapping holes (drawn in black). Three different local convex approximations (drawn in orange) are shown that are found around query points with the iterative algorithm described in section~\ref{sect:perc:cost_definition}.}}
\label{fig:perc:polygon_examples}
\end{figure}
\subsection{Signed Distance Field}\label{sect:perc:signed_distance_field}
Before computing the SDF, we take advantage of the classification between terrain that will be potentially stepped on and terrain that will not be stepped on. To all cells that are non-steppable, we add a vertical margin of \SI{2}{\centi\meter}, and dilate the elevation by one cell. The latter effectively horizontally inflates all non-steppable areas by the map resolution. This procedure corrects for the problem that edges tend to be underestimated in the provided elevation map.
We use a dense 3D voxel grid, where each voxel contains the value and 3D gradient. The previous motion plan is used to determine the 3D volume where distance information is needed. This volume is a bounding box that contains all collision bodies of the last available plan with a margin of \SI{25}{\centi\meter}. This way, the size and shape of the SDF grid dynamically scales with the motion that is executed. Storing both value and derivative as proposed in \cite{pankert2020perceptive} allows for efficient interpolation during optimization. However, in contrast to \cite{pankert2020perceptive}, where values and gradients are cached after the first call, we opt to precompute the full voxel grid to reduce the computation time during optimization as much as possible.
This is possible by taking advantage of the extra structure that the 2.5D representation provides. A detailed description of how the SDF can be efficiently computed from an elevation map is given in Appendix~\ref{sect:perc:sdf_appendix}.
\subsection{Torso reference map}\label{sect:perc:torso_reference_layer}
With user input defined as horizontal velocity and an angular rate along the z-direction, it is the responsibility of the controller to decide on the height and orientation of the torso. We would like the torso pose to be positioned in such a way that suitable footholds are in reach for all of the feet. We therefore create a layer that is a smooth interpolation of all steppable regions as described in \cite{jenelten2021TAMOLS}. The use of this layer to generate a torso height and orientation reference is presented in section~\ref{sect:perc:reference_generation}.
\section{Motion planning}
\label{sect:perc:motion_optimization}
In this section, we describe the nonlinear MPC formulation. In particular, we set out to define all components in the following nonlinear optimal control problem:
\begin{subequations}
\begin{align}
& \underset{\mathbf u(\cdot)}{\text{minimize}} && \Phi(\mathbf x(T)) + \int_{0}^{T} L(\mathbf x(t),\mathbf u(t), t) \text{ d}t,
\label{eq:perc:mpc_cost} \\
&\text{subject to:} && \mathbf x(0) = \new{\hat{\mathbf x}}, \label{eq:perc:mpc_initial} \\
& & & \dot{\mathbf x} = \mathbf f^c(\mathbf x, \mathbf u, t), \label{eq:perc:mpc_dynamics} \\
& & & \mathbf g(\mathbf x,\mathbf u, t) = \mathbf 0, \label{eq:perc:mpc_eqconstraint}
\end{align} \label{eq:perc:mpc_formulation}%
\end{subequations}
where $\mathbf x(t)$ and $\mathbf u(t)$ are the state and the input at time $t$, \new{and $\hat{\mathbf x}$ is the current measured state}. The term $L(\cdot)$ is a time-varying running cost, and $\Phi(\cdot)$ is the cost at the terminal state $\mathbf x(T)$.
The goal is to find a control signal that minimizes this cost subject to the initial condition, \new{$\hat{\mathbf x}$}, system dynamics, $\mathbf f^c(\cdot)$, and equality constraints, $\mathbf g(\cdot)$. Inequality constraints are all handled through penalty functions and will be defined as part of the cost function in section~\ref{sect:perc:cost_definition}.
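As an aside, a common penalty choice for inequality constraints $h(\mathbf x, \mathbf u) \geq 0$ in this setting is a relaxed log-barrier: logarithmic inside the feasible set and extended quadratically below a relaxation threshold $\delta$, so that it remains defined for infeasible iterates. The exact penalty used here is specified with the cost terms in section~\ref{sect:perc:cost_definition}; the Python sketch below, with assumed values for $\mu$ and $\delta$, is only illustrative:
\begin{verbatim}
import numpy as np

def relaxed_log_barrier(h, mu=0.1, delta=0.01):
    # -mu*log(h) for h > delta; C1-continuous quadratic
    # extension for h <= delta (mu and delta are assumed).
    h = np.asarray(h, dtype=float)
    log_branch = -mu * np.log(np.maximum(h, 1e-12))
    quad_branch = (mu / 2.0) * (((h - 2 * delta) / delta) ** 2
                                - 1.0) - mu * np.log(delta)
    return np.where(h > delta, log_branch, quad_branch)
\end{verbatim}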
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figures/stateDefinition.jpg}
\caption{Overview of the coordinates frames and constraints used in the definition of the MPC problem. On the front left foot, a friction cone is shown, defined in the terrain frame $\mathcal{F}_T$. On the right front foot, a swing reference trajectory is drawn between the liftoff frame $\mathcal{F}_{T^-}$ and touchdown frame $\mathcal{F}_{T^+}$. Foot placement constraints are defined as a set of half-spaces in the touchdown frame. Stance legs have collision bodies at the knee, as illustrated on the right hind leg, while swing legs have collision bodies on both the foot and the knee, as shown on the left hind leg.}
\label{fig:perc:state_definition}
\end{figure}
\subsection{Robot definition}
We define the generalized coordinates and velocities as:
\begin{equation}
\mathbf q = \left[\mbox{\boldmath $\theta$}_{B}^\top, \mathbf p_{B}^\top, \mathbf q_{j}^\top \right]^\top{\mkern-16mu,} \quad \dot{\mathbf q} = \left[\mbox{\boldmath $\omega$}_{B}^\top, \mathbf v_{B}^\top, \dot{\mathbf q}_{j}^\top \right]^\top{\mkern-16mu,}
\end{equation}
where $\mbox{\boldmath $\theta$}_B\in\mathbb{R}^{3}$ is the orientation of the base frame, $\mathcal{F}_B$, in Euler angles, and $\mathbf p_{B}\in\mathbb{R}^3$ is the position of the base in the world frame, $\mathcal{F}_W$. $\mbox{\boldmath $\omega$}_{B}\in\mathbb{R}^3$ and $\mathbf v_{B}\in\mathbb{R}^3$ are the angular rate and linear velocity of the base in the body frame $\mathcal{F}_B$. Joint positions and velocities are given by $\mathbf q_j\in\mathbb{R}^{12}$ and $\dot{\mathbf q}_{j}\in\mathbb{R}^{12}$. The collection of all contact forces is denoted by ${\mbox{\boldmath $\lambda$}}\in\mathbb{R}^{12}$. When referring to these quantities per leg, we will use a subscript $i$, e.g. $\mathbf q_i\in\mathbb{R}^{3}$ or ${\mbox{\boldmath $\lambda$}}_i\in\mathbb{R}^{3}$. All subscripts for legs in contact are contained in the set $\mathcal{C}$. A graphical illustration of the robot together with the defined coordinate frames is provided in Fig.~\ref{fig:perc:state_definition}.
\subsection{Torso dynamics}
\label{sect:perc:torso_dynamics}
To derive the torso dynamics used in this work, consider the full rigid body dynamics of the robot,
\begin{equation}
\mathbf M(\mathbf q)\ddot{\mathbf q}+\mathbf n(\mathbf q,\dot{\mathbf q}) = \mathbf S^\top \mbox{\boldmath $\tau$} + \mbox{\boldmath $\tau$}^{\text{dist}} + \sum_{i\in \mathcal{C}} \mathbf J^\top_i(\mathbf q) \mbox{\boldmath $\lambda$}_i,
\label{eq:perc:full_rigid_body_dynamics}
\end{equation}
with inertia matrix $\mathbf M:\mathbb{R}^{18}\to\mathbb{R}^{18\times 18}$, generalized accelerations $\ddot{\mathbf q}\in\mathbb{R}^{18}$, and nonlinear terms $\mathbf n:\mathbb{R}^{18}\times\mathbb{R}^{18}\to\mathbb{R}^{18}$ on the left-hand side. The right-hand side contains the selection matrix $\mathbf S = \left[ \mathbf 0_{12\times6}, \, \mathbf I_{12\times12} \right] \in\mathbb{R}^{12\times18}$, actuation torques $\mbox{\boldmath $\tau$}\in\mathbb{R}^{12}$,
disturbance forces $\mbox{\boldmath $\tau$}^{\text{dist}}\in\mathbb{R}^{18}$, contact Jacobians $\mathbf J_i:\mathbb{R}^{18}\to\mathbb{R}^{3\times 18}$, and contact forces ${\mbox{\boldmath $\lambda$}}_i\in\mathbb{R}^{3}$.
For these equations of motion, it is well known that for an articulated system the underactuated top six rows are of main interest for motion planning \cite{ponton2018ontime}. These so-called centroidal dynamics govern the range of motion that can be achieved \cite{wieber2006holonomy,orin2013centroidal}. Solving the centroidal dynamics for the base acceleration gives:
\begin{align}
\begin{bmatrix} \dot{\mbox{\boldmath $\omega$}}_{B} \\ \dot{\mathbf v}_{B} \end{bmatrix}
&=
\mathbf M_{B}^{-1}\left(\mbox{\boldmath $\tau$}^{\text{dist}}_{B} - \mathbf M_{Bj}\Ddot{\mathbf q}_{j} - \mathbf n_{B} + \sum_{i\in \mathcal{C}} \mathbf J^\top_{B,i} \mbox{\boldmath $\lambda$}_i \right), \\
&= \mathbf f_{B}(\mathbf q,\dot{\mathbf q}, \Ddot{\mathbf q}_j, \mbox{\boldmath $\lambda$}, \mbox{\boldmath $\tau$}^{\text{dist}}_{B}),
\label{eq:perc:torso_dynamics}
\end{align}
where $\mathbf M_{B} \in \mathbb{R}^{6\times6}$ is the compound inertia tensor at the top left of $\mathbf M(\mathbf q)$, and $\mathbf M_{Bj} \in \mathbb{R}^{6\times12}$ is the top right block that encodes inertial coupling between the legs and base. The other terms with subscript $B$ correspond to the top 6 rows of the same terms in \eqref{eq:perc:full_rigid_body_dynamics}.
To simplify the torso dynamics, we evaluate this function with zero inertial coupling forces from the joints, i.e.\ $ \mathbf M_{Bj}\Ddot{\mathbf q}_j = \mathbf 0$. This simplification allows us to consider the legs only on velocity level and removes joint accelerations from the formulation. From here, further simplifications would be possible. Evaluating the function at a nominal joint configuration and zero joint velocity creates a constant inertia matrix and gives rise to the commonly used single rigid body assumption. While this assumption is appropriate on flat terrain, the joints move far away from their nominal configuration in this work, creating a significant shift in mass distribution and center of mass location.
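In code, evaluating \eqref{eq:perc:torso_dynamics} with the coupling term dropped amounts to a few lines. The following Python sketch assumes the full $18\times18$ quantities are available from a rigid-body dynamics library; the function and variable names are ours:
\begin{verbatim}
import numpy as np

def base_acceleration(M, n, tau_dist_B,
                      contact_jacobians, contact_forces):
    # Top six rows of the equations of motion, with the
    # inertial coupling term M_Bj * qdd_j set to zero.
    # M: 18x18, n: length 18, tau_dist_B: length 6,
    # each J_i: 3x18, each lambda_i: length 3.
    M_B = M[:6, :6]                    # compound inertia at the base
    rhs = np.asarray(tau_dist_B) - n[:6]
    for J_i, lam_i in zip(contact_jacobians, contact_forces):
        rhs = rhs + (J_i.T @ lam_i)[:6]  # generalized contact forces
    return np.linalg.solve(M_B, rhs)     # [omega_dot_B, v_dot_B]
\end{verbatim}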
\subsection{Input loopshaping}
\label{sect:perc:loopshaping}
The bandwidth limitations of the series elastic actuators used in ANYmal pose an additional constraint on the set of motions that are feasible on hardware. Instead of trying to accurately model these actuator dynamics, we use a frequency-dependent cost function to penalize high-frequency content in the contact forces and joint velocity signals \cite{grandia2019frequency}. For completeness, we present here the resulting system augmentation in the time domain:
\begin{align}
\dot{\mathbf s}_{\lambda} &= \mathbf A_{\lambda} \mathbf s_{\lambda} + \mathbf B_{\lambda} \boldsymbol{ \nu}_{\lambda}, &\dot{\mathbf s}_{j} &= \mathbf A_{j} \mathbf s_{j} + \mathbf B_{j} \boldsymbol{ \nu}_{j}, \label{eq:perc:aug_system}\\
\mbox{\boldmath $\lambda$} &= \mathbf C_{\lambda} \mathbf s_{\lambda} + \mathbf D_{\lambda} \boldsymbol{ \nu}_{\lambda}, &\dot{\mathbf q}_{j} &= \mathbf C_{j} \mathbf s_{j} + \mathbf D_{j} \boldsymbol{ \nu}_{j}, \notag
\end{align}
where $\mathbf s_{\lambda}$ and $\mathbf s_{j}$ are additional states, and $\boldsymbol{ \nu}_{\lambda}$ and $\boldsymbol{ \nu}_{j}$ are auxiliary inputs, associated with contact forces and joint velocities respectively. When the filters ($\boldsymbol{ \nu}_{\lambda} \rightarrow \mbox{\boldmath $\lambda$}$ and $\boldsymbol{ \nu}_{j} \rightarrow \dot{\mathbf q}_{j}$) are low-pass filters, penalizing the auxiliary input is equivalent to penalizing high frequency content in $\mbox{\boldmath $\lambda$}$ and $\dot{\mathbf q}_{j}$.
An extreme case is obtained when choosing $\mathbf A_{\lambda} = \mathbf D_{\lambda} = \mathbf 0$, $\mathbf B_{\lambda} = \mathbf C_{\lambda} = \mathbf I$, in which case the auxiliary input becomes the derivative, $\dot{\mbox{\boldmath $\lambda$}}$. This reduces to the common system augmentation technique that allows penalization of input rates \cite{rawlings2017model}.
In our case we allow some direct control ($\mathbf D \neq \mathbf 0$) and select $\mathbf A_{\lambda} = \mathbf A_{j} = \mathbf 0$, $\mathbf B_{\lambda} = \mathbf B_{j} = \mathbf I$, $\mathbf C_{\lambda} = \frac{100}{4}\mathbf I$, $\mathbf C_{j} = \frac{50}{3}\mathbf I$, $\mathbf D_{\lambda} = \frac{1}{4}\mathbf I$, $\mathbf D_{j} = \frac{1}{3}\mathbf I$. This corresponds to a progressive increase in cost up to a frequency of \SI{100}{\radian\per\second} for $\mbox{\boldmath $\lambda$}$ and up to \SI{50}{\radian\per\second} for $\dot{\mathbf q}_{j}$, where high frequency components have their cost increased by a factor of $4$ and $3$ respectively.
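To see how these choices shape the penalty, note that with $\mathbf A = \mathbf 0$, $\mathbf B = \mathbf I$ the filter reads $\mbox{\boldmath $\lambda$}(s) = (\mathbf C / s + \mathbf D)\, \boldsymbol{\nu}_{\lambda}(s)$, so a unit cost on $\boldsymbol{\nu}_{\lambda}$ acts on $\mbox{\boldmath $\lambda$}$ with the frequency-dependent weight $|j\omega / (C + j\omega D)|$, which saturates at $1/D$. A short Python check of the corner frequencies and high-frequency factors:
\begin{verbatim}
import numpy as np

def cost_weight(omega, C, D):
    # Implied weight on the physical signal at frequency omega
    # when the auxiliary input nu carries a unit cost.
    s = 1j * omega
    return np.abs(s / (C + s * D))

w = np.array([1.0, 10.0, 100.0, 1000.0, 1e4])
print(cost_weight(w, C=100/4, D=1/4))  # forces: -> 4 above 100 rad/s
print(cost_weight(w, C=50/3, D=1/3))   # joints: -> 3 above 50 rad/s
\end{verbatim}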
\subsection{System Dynamics}
We are now ready to define the state vector $\mathbf x \in \mathbb{R}^{48}$ and input vector $\mathbf u \in \mathbb{R}^{24}$ used during motion optimization:
\begin{equation}
\mathbf x = \left[\mbox{\boldmath $\theta$}_{B}^\top, \mathbf p_{B}^\top, \mbox{\boldmath $\omega$}_{B}^\top, \mathbf v_{B}^\top, \mathbf q_{j}^\top, \mathbf s_{\lambda}^\top, \mathbf s_{j}^\top \right]^\top{\mkern-16mu,} \quad \mathbf u = \left[\boldsymbol{ \nu}_{\lambda}^\top, \boldsymbol{ \nu}_{j}^\top \right]^\top{\mkern-16mu.}
\end{equation}
Putting together the robot dynamics from section~\ref{sect:perc:torso_dynamics} and system augmentation described in~\ref{sect:perc:loopshaping} gives the continuous time MPC model $\dot{\mathbf x} = \mathbf f^c(\mathbf x, \mathbf u, t)$:
\begin{equation}
\frac{\text{d}}{\textnormal{d}t} \begin{bmatrix}
\mbox{\boldmath $\theta$}_{B} \\ \mathbf p_{B} \\ \mbox{\boldmath $\omega$}_{B} \\ \mathbf v_{B} \\ \mathbf q_{j} \\ \mathbf s_{\lambda} \\ \mathbf s_{j}
\end{bmatrix} =
\begin{bmatrix}
\mathbf T(\mbox{\boldmath $\theta$}_{B}) \mbox{\boldmath $\omega$}_{B} \\
\mathbf R_B(\mbox{\boldmath $\theta$}_{B}) \, \mathbf v_{B} \\
\vspace{-.5em} \\ \vspace{.5em}
\mathbf f_{B}(\mathbf q,\dot{\mathbf q}, \mathbf 0, \mathbf C_{\lambda} \mathbf s_{\lambda} + \mathbf D_{\lambda} \boldsymbol{ \nu}_{\lambda}, \mbox{\boldmath $\tau$}^{\text{dist}}_{B})
\\
\mathbf C_{j} \mathbf s_{j} + \mathbf D_{j} \boldsymbol{ \nu}_{j} \\
\mathbf A_{\lambda} \mathbf s_{\lambda} + \mathbf B_{\lambda} \boldsymbol{ \nu}_{\lambda} \\
\mathbf A_{j} \mathbf s_{j} + \mathbf B_{j} \boldsymbol{ \nu}_{j}
\end{bmatrix},
\label{eq:perc:mpc_full_dynamics}
\end{equation}
where $\mathbf T(\mbox{\boldmath $\theta$}_{B}):\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}$ provides the conversion between angular body rates and Euler angle derivatives, and $\mathbf R_B(\mbox{\boldmath $\theta$}_{B}):\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}$ provides the body to world rotation matrix. The disturbance wrench $\mbox{\boldmath $\tau$}^{\text{dist}}_{B}$ is considered a parameter and is assumed constant over the MPC horizon.
\subsection{Reference generation}
\label{sect:perc:reference_generation}
The user commands 2D linear velocities and an angular rate in the horizontal plane, as well as a desired gait pattern. A full motion and contact force reference is generated to encode these user commands and additional motion preferences into the cost function defined in section~\ref{sect:perc:cost_definition}.
This process is carried out before every MPC iteration.
As a first step, assuming a constant input along the horizon, a 2D base reference position and heading direction are extrapolated in the world frame. At each point in time, the 2D pose is converted to a 2D position for each hip. The smoothed elevation map, i.e.\ the \textit{torso reference} layer shown in Fig.~\ref{fig:perc:perception_overview}, is interpolated at the 2D hip location. The interpolated elevation, in addition to a desired nominal height, $h_{nom}$, gives a 3D reference position for each hip. A least-squares fit through the four hip positions gives the 6DoF base reference.
The extracted base reference and the desired gait pattern are used to derive nominal foothold locations. Here we use the common heuristic that the nominal foothold is located below the hip, \new{in gravity-aligned direction}, at the middle of the contact phase \cite{raibert1986legged}. Additionally, for the first upcoming foothold, a feedback on the measured velocity is added:
\begin{equation}
\mathbf p_{i,nom} = \mathbf p_{i,hip,nom} + \sqrt{\frac{h_{nom}}{g}} (\mathbf v_{B,meas} - \mathbf v_{B,com}),
\end{equation}
where $\mathbf p_{i,nom} \in \mathbb{R}^{3}$ is the nominal foothold, $\mathbf p_{i,hip,nom} \in \mathbb{R}^{3}$ is the nominal foothold location directly below the hip, and $g$ is the gravitational constant. $\mathbf v_{B,meas}$ and $\mathbf v_{B,com}$ are the measured and commanded base velocities, respectively.
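This heuristic is a one-liner in practice; in the Python sketch below the nominal height value is an assumption for illustration:
\begin{verbatim}
import numpy as np

def nominal_foothold(p_hip_nom, v_meas, v_com,
                     h_nom=0.55, g=9.81):
    # Raibert-style foothold below the hip with velocity
    # feedback; h_nom = 0.55 m is an assumed nominal height.
    return p_hip_nom + np.sqrt(h_nom / g) * (v_meas - v_com)
\end{verbatim}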
With the nominal foothold locations known, the plane segmentation defined in section~\ref{sect:perc:plane_segmentation} is used to adapt the nominal foothold locations to the perceived terrain. Each foothold is projected onto the plane that is closest and within kinematic limits. Concretely, we pick the reference foothold, $\mathbf p_{i,\new{proj}}$, according to:
\begin{equation}
\mathop{\mathrm{argmin}}_{\mathbf p_{i,\new{proj}} \in \Pi(\mathbf p_{i,nom})} \|\mathbf p_{i,nom} - \mathbf p_{i,\new{proj}}\|^2_2 + w_{kin} f_{kin}(\mathbf p_{i,\new{proj}}),
\end{equation}
where $\Pi(\mathbf p_{i,nom})$ is \new{a set of candidate points. For each segmented plane we take the point within that region that is closest to the nominal foothold as a candidate. The term} $f_{kin}$ is a kinematic penalty with weight $w_{kin}$ that penalizes the point if the leg extension at liftoff or touchdown is beyond a threshold and if the foothold crosses over to the opposite side of the body. Essentially, this is a simplified version of the foothold batch search algorithm presented in \cite{jenelten2020perceptive}, which searches over cells of the map instead of pre-segmented planes.
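The plane selection can be sketched as follows; the candidate extraction and the kinematic penalty are assumed to be given as callables, and the names are illustrative rather than our actual implementation:
\begin{verbatim}
# Sketch of the foothold projection: one candidate point per
# segmented plane (the in-region point closest to the nominal
# foothold), scored by distance plus a kinematic penalty.
import numpy as np

def project_foothold(p_nom, candidates, f_kin, w_kin=1.0):
    """candidates: list of 3D points, one per plane.
    f_kin: callable returning the kinematic penalty of a point."""
    costs = [np.sum((p_nom - p)**2) + w_kin * f_kin(p)
             for p in candidates]
    return candidates[int(np.argmin(costs))]
\end{verbatim}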
After computing all projected footholds, heuristic swing trajectories are computed with two quintic splines: one from liftoff to apex and one from apex to touchdown. The spline is constrained by a desired liftoff and touchdown velocity, and an apex location is selected such that the trajectory clears the highest terrain point between the footholds. Inverse kinematics is used to derive joint position references corresponding to the base and feet references. Finally, contact force references are computed by dividing the total weight of the robot equally among all feet that are in contact. Joint velocity references are set to zero.
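Each quintic segment is fully determined by its boundary conditions. A per-axis sketch, assuming position and velocity boundary conditions and (for simplicity here) zero boundary accelerations, is:
\begin{verbatim}
# Fit one quintic segment c[0] + c[1]*t + ... + c[5]*t^5 per axis
# from boundary conditions at t0 and t1 (zero end accelerations
# assumed for illustration).
import numpy as np

def quintic_coeffs(t0, t1, p0, v0, p1, v1, a0=0.0, a1=0.0):
    pos = [[t**i for i in range(6)] for t in (t0, t1)]
    vel = [[i * t**(i - 1) if i > 0 else 0.0 for i in range(6)]
           for t in (t0, t1)]
    acc = [[i * (i - 1) * t**(i - 2) if i > 1 else 0.0
            for i in range(6)] for t in (t0, t1)]
    M = np.array(pos + vel + acc)
    b = np.array([p0, p1, v0, v1, a0, a1])
    return np.linalg.solve(M, b)
\end{verbatim}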
\subsection{Cost \& Soft Inequality Constraints}
\label{sect:perc:cost_definition}
The cost function \eqref{eq:perc:mpc_cost} is built out of several components. The running cost $L(\mathbf x, \mathbf u, t)$ can be split into tracking costs $L_{\mbox{\boldmath $\epsilon$}}$, loopshaping costs $L_{\boldsymbol{ \nu}}$, and penalty costs $L_{\mathcal{B}}$:
\begin{equation}
L = L_{\mbox{\boldmath $\epsilon$}} + L_{\boldsymbol{ \nu}} + L_{\mathcal{B}}.
\end{equation}
The motion tracking costs are used to follow the reference trajectory defined in section~\ref{sect:perc:reference_generation}. Tracking errors are defined for the base, $\mbox{\boldmath $\epsilon$}_{B}$, and for each foot, $\mbox{\boldmath $\epsilon$}_{i}$:
\begin{equation}
\mbox{\boldmath $\epsilon$}_{B} = \begin{bmatrix}
\text{log}(\mathbf R_{B}\mathbf R^\top_{B,ref})^\vee \\
\mathbf p_{B} - \mathbf p_{B,ref} \\ \mbox{\boldmath $\omega$}_{B} - \mbox{\boldmath $\omega$}_{B,ref} \\ \mathbf v_{B} - \mathbf v_{B,ref}
\end{bmatrix}, \, \mbox{\boldmath $\epsilon$}_{i} = \begin{bmatrix}
\mathbf q_{i} - \mathbf q_{i,ref} \\
\dot{\mathbf q}_{i} - \dot{\mathbf q}_{i,ref} \\
\mathbf p_{i} - \mathbf p_{i,ref} \\
\mathbf v_{i} - \mathbf v_{i,ref} \\
\mbox{\boldmath $\lambda$}_{i} - \mbox{\boldmath $\lambda$}_{i,ref} \\
\end{bmatrix},
\end{equation}
where $\text{log}(\mathbf R_{B}\mathbf R^\top_{B,ref})^\vee$ is the logarithmic map of the orientation error, represented as a 3D rotation vector, and $\mathbf p_{i}$ and $\mathbf v_{i}$ are the foot position and velocity in the world frame. Together with diagonal, positive-definite weight matrices $\mathbf W_{B}$ and $\mathbf W_{i}$, \new{whose individual elements are listed in Table~\ref{tab:perc:tracking_weights}}, these errors form the following nonlinear least-squares cost:
\begin{equation}
L_{\mbox{\boldmath $\epsilon$}} = \frac{1}{2}\|\mbox{\boldmath $\epsilon$}_{B}\|^2_{\mathbf W_{B}} +
\sum_{i=1}^4 \frac{1}{2}\|\mbox{\boldmath $\epsilon$}_{i}\|^2_{\mathbf W_{i}}.
\label{eq:perc:motion_cost}
\end{equation}
\begin{table}[tb]
\centering
\caption{\new{Motion tracking weights}}
\label{tab:perc:tracking_weights}
\begin{tabular}{lr}
Term & Weights \\ \hline
$\text{log}(\mathbf R_{B}\mathbf R^\top_{B,ref})^\vee$ & $(100.0, 300.0, 300.0)$ \\
$ \mathbf p_{B} - \mathbf p_{B,ref} $ & $(1000.0, 1000.0, 1500.0)$ \\
$ \mbox{\boldmath $\omega$}_{B} - \mbox{\boldmath $\omega$}_{B,ref} $ & $(10.0, 30.0, 30.0)$ \\
$ \mathbf v_{B} - \mathbf v_{B,ref} $ & $(15.0, 15.0, 30.0)$ \\
$ \mathbf q_{i} - \mathbf q_{i,ref} $ & $(2.0, 2.0, 1.0)$ \\
$ \dot{\mathbf q}_{i} - \dot{\mathbf q}_{i,ref} $ & $(0.02, 0.02, 0.01)$ \\
$ \mathbf p_{i} - \mathbf p_{i,ref} $ & $(30.0, 30.0, 30.0)$ \\
$ \mathbf v_{i} - \mathbf v_{i,ref} $ & $(15.0, 15.0, 15.0)$ \\
$ \mbox{\boldmath $\lambda$}_{i} - \mbox{\boldmath $\lambda$}_{i,ref} $ & $(0.001, 0.001, 0.001)$
\end{tabular}
\end{table}
As discussed in section~\ref{sect:perc:loopshaping}, high-frequency content in joint velocities and contact forces is penalized through a cost on the corresponding auxiliary input. This is a simple quadratic cost:
\begin{equation}
L_{\boldsymbol{ \nu}} = \frac{1}{2} \boldsymbol{ \nu}_{\lambda}^\top \mathbf R_{\lambda} \boldsymbol{ \nu}_{\lambda} + \frac{1}{2} \boldsymbol{ \nu}_{j}^\top \mathbf R_{j} \boldsymbol{ \nu}_{j},
\label{eq:perc:smoothness}
\end{equation}
where $\mathbf R_{\lambda}$ and $\mathbf R_{j}$ are constant, positive semi-definite weight matrices. To obtain an appropriate scaling and avoid further manual tuning, these matrices are derived from the quadratic approximation of the motion tracking cost~\eqref{eq:perc:motion_cost}, with respect to $\mbox{\boldmath $\lambda$}$ and $\dot{\mathbf q}_j$ respectively, at the nominal stance configuration of the robot.
All inequality constraints are handled through the penalty cost. In this work, we use relaxed barrier functions \cite{hauser2006barrier,feller2017stabilizing}. This penalty function is defined as a log-barrier on the interior of the feasible space and switches to a quadratic function at a distance $\delta$ from the constraint boundary:
\begin{equation}
\mathcal{B}(h)=
\begin{cases}
- \mu \ln(h) , & h \geq \delta, \\
\frac{\mu}{2}\left(\left(\frac{h - 2\delta}{\delta}\right)^2 - 1 \right) - \mu\ln(\delta), & h < \delta.
\end{cases}
\label{eq:perc:relaxed_barrier}
\end{equation}
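A direct transcription of \eqref{eq:perc:relaxed_barrier}, with example values for $\mu$ and $\delta$, reads:
\begin{verbatim}
# Relaxed log-barrier: logarithmic on the interior, quadratic
# extension below the relaxation threshold delta.
import math

def relaxed_barrier(h, mu=0.1, delta=1e-3):
    if h >= delta:
        return -mu * math.log(h)
    return 0.5 * mu * (((h - 2.0*delta) / delta)**2 - 1.0) \
           - mu * math.log(delta)
\end{verbatim}
The two branches match in value and first derivative at $h = \delta$, so the penalty is continuously differentiable.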
The penalty is taken element-wise for vector-valued inequality constraints. The sum of all penalties is given as follows:
\begin{equation}
L_{\mathcal{B}} = \sum_{i=1}^{4} \mathcal{B}_j\left(\mathbf h^{j}_{i}\right) + \sum_{i \in \mathcal{C}} \left( \mathcal{B}_t\left(\mathbf h^{t}_{i} \right) + \mathcal{B}_{\lambda}\left(h^{\lambda}_{i} \right) \right) + \sum_{c \in \mathcal{D}} \mathcal{B}_d\left(h^{d}_{c} \right),
\end{equation}
with joint limit constraints $\mathbf h^{j}_{i}$ for all legs, foot placement and friction cone constraints, $\mathbf h^{t}_{i}$ and $h^{\lambda}_{i}$, for legs in contact, and collision avoidance constraints $h^{d}_{c}$ for all bodies in a set $\mathcal{D}$.
The joint limit constraints contain upper bounds $\{ \overline{\mathbf q}_j$, $\overline{\dot{\mathbf q}}_j$, $\overline{\mbox{\boldmath $\tau$}}\}$ and lower bounds $\{\underline{\mathbf q}_j$, $\underline{\dot{\mathbf q}}_j$, $\underline{\mbox{\boldmath $\tau$}}\}$ for positions, velocities, and torques:
\begin{equation}
\mathbf h_i^{j} = \begin{bmatrix}
\overline{\mathbf q}_j - \mathbf q_j \\
\mathbf q_j - \underline{\mathbf q}_j \\
\overline{\dot{\mathbf q}}_j - \dot{\mathbf q}_j \\
\dot{\mathbf q}_j - \underline{\dot{\mathbf q}}_j \\
\overline{\mbox{\boldmath $\tau$}} - \mbox{\boldmath $\tau$} \\
\mbox{\boldmath $\tau$} - \underline{\mbox{\boldmath $\tau$}}
\end{bmatrix} \geq \mathbf 0,
\label{eq:perc:jointLimits}
\end{equation}
where we approximate the joint torques by considering a static equilibrium in each leg, i.e.\ $\mbox{\boldmath $\tau$}_i = \mathbf J^\top_{j,i} \mbox{\boldmath $\lambda$}_i$.
The foot placement constraint is a set of linear inequality constraints in task space:
\begin{equation}
\mathbf h_i^{t} = \mathbf A_i^t \cdot \mathbf p_{i} + \mathbf b_i^t \geq \mathbf 0,
\label{eq:perc:foothold_position_constraint}
\end{equation}
where $\mathbf A_i^t \in \mathbb{R}^{m\times3}$, and $\mathbf b_i^t \in \mathbb{R}^{m}$ define $m$ half-space constraints in 3D. Each half-space is defined as the plane spanned by an edge of the 2D polygon and the surface normal of the touchdown terrain $\mathcal{F}_{T+}$. The polygon is obtained by initializing all $m$ vertices at the reference foothold derived in section~\ref{sect:perc:reference_generation} and iteratively displacing them outwards. Each vertex is displaced in a round-robin fashion until it reaches the \new{boundary} of the segmented region or until further movement would cause the polygon to become non-convex.
Similar to \cite{deits2015computing}, we have favoured the low computational complexity of an iterative scheme over an exact approach to obtaining a convex inner approximation. The \new{first set of} extracted constraints remains unaltered for a foot that is in the second half of the swing phase to prevent last-minute jumps in the constraints.
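The vertex displacement scheme can be sketched as follows; the region membership test is left as an assumed callable, and this 2D sketch only illustrates the round-robin growth with a convexity check, not our actual plane-based implementation:
\begin{verbatim}
# Grow a convex polygon around the reference foothold: each vertex
# moves outward along its ray until it would leave the region
# (inside_region: assumed membership test) or break convexity.
import numpy as np

def grow_polygon(center, inside_region, m=8, step=0.01, iters=100):
    ang = np.linspace(0.0, 2.0*np.pi, m, endpoint=False)
    rays = np.stack([np.cos(ang), np.sin(ang)], axis=1)
    verts = np.tile(np.asarray(center, float), (m, 1))
    active = [True]*m

    def is_convex(v):
        e = np.roll(v, -1, axis=0) - v
        en = np.roll(e, -1, axis=0)
        cross = e[:, 0]*en[:, 1] - e[:, 1]*en[:, 0]
        return np.all(cross >= 0.0)      # vertices ordered CCW

    for _ in range(iters):
        if not any(active):
            break
        for i in range(m):               # round-robin over vertices
            if active[i]:
                trial = verts.copy()
                trial[i] += step*rays[i]
                if inside_region(trial[i]) and is_convex(trial):
                    verts = trial
                else:
                    active[i] = False
    return verts
\end{verbatim}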
The friction cone constraint is implemented as:
\begin{equation}
h^{\lambda}_{i} = \mu_c F_z - \sqrt{F_x^2 + F_y^2 + \epsilon^2} \geq 0,
\label{eq:perc:cone}
\end{equation}
with $[F_x, F_y, F_z]^\top = \mathbf R_T^\top \mathbf R_B \mbox{\boldmath $\lambda$}_i$, defining the forces in the local terrain frame. $\mu_c$ is the friction coefficient, and $\epsilon > 0$ is a parameter that ensures a continuous derivative at $\mbox{\boldmath $\lambda$}_i = \mathbf 0$, and at the same time creates a safety margin~\cite{grandia2019feedback}.
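Evaluating this constraint is straightforward; a sketch with illustrative names is:
\begin{verbatim}
# Smoothed friction cone of the equation above: rotate the
# body-frame contact force into the local terrain frame and
# evaluate mu*Fz - sqrt(Fx^2 + Fy^2 + eps^2).
import numpy as np

def friction_cone(lam_b, R_terrain, R_body, mu_c=0.7, eps=0.5):
    F = R_terrain.T @ R_body @ lam_b    # force in terrain frame
    return mu_c*F[2] - np.sqrt(F[0]**2 + F[1]**2 + eps**2)
\end{verbatim}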
The collision avoidance constraint is given by evaluating the SDF at the center of a collision sphere, $\mathbf p_c$, together with the required distance given by the radius, $r_c$, and a shaping function $d_\text{min}(t)$:
\begin{equation}
h^{d}_{c} = d^{SDF}(\mathbf p_c) - r_c - d_\text{min}(t) \geq 0. \label{eq:perc:sdf_inequality}
\end{equation}
The primary use of the shaping function is to relax the constraint if a foot starts a swing phase from below the map. This happens when the perceived terrain is higher than the actual terrain, for example in the case of soft terrain such as vegetation or snow, or simply because of drift and errors in the estimated map. To avoid the robot using maximum velocity to escape the collision, we provide smooth guidance back to free space with a cubic spline trajectory. The collision set $\mathcal{D}$ contains collision bodies for all knees and for all feet that are in swing phase, as visualized on the hind legs in Fig.~\ref{fig:perc:state_definition}.
Finally, we use a quadratic cost as the terminal cost in \eqref{eq:perc:mpc_cost}. To approximate the infinite horizon cost incurred after the finite horizon length, we solve a Linear Quadratic Regulator (LQR) problem for the linear approximation of the MPC model and quadratic approximation of the intermediate costs around the nominal stance configuration of the robot. The Riccati matrix $\mathbf S_{\text{LQR}}$ of the cost-to-go is used to define the quadratic cost around the reference state:
\begin{equation}
\Phi(\mathbf x) = \frac{1}{2} \left(\mathbf x - \mathbf x_{ref}(T) \right)^\top \mathbf S_{\text{LQR}} \left(\mathbf x - \mathbf x_{ref}(T)\right).
\end{equation}
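A sketch of this construction, assuming a discrete-time LQR formulation with linearized dynamics $(\mathbf A, \mathbf B)$ and quadratic cost weights $(\mathbf Q, \mathbf R)$ around the nominal stance (the exact formulation in our implementation may differ), is:
\begin{verbatim}
# Terminal cost from the Riccati matrix of a discrete-time LQR
# around the nominal stance configuration (formulation assumed).
import numpy as np
from scipy.linalg import solve_discrete_are

def terminal_cost(x, x_ref_T, A, B, Q, R):
    S = solve_discrete_are(A, B, Q, R)  # Riccati cost-to-go matrix
    e = x - x_ref_T
    return 0.5 * e @ S @ e
\end{verbatim}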
\subsection{Equality constraints}
\label{sect:perc:equality_constraints}
For each foot in swing phase, the contact forces are required to be zero:
\begin{equation}
\mbox{\boldmath $\lambda$}_{i} = \mathbf{0}, \qquad \forall i \notin \mathcal{C}.
\end{equation}
Additionally, for each foot in contact, the end-effector velocity is constrained to be zero. For swing phases, the reference trajectory is enforced only in the normal direction. This ensures that the foot lifts off and touches down with a specified velocity while leaving complete freedom of foot placement in the tangential direction.
\begin{align*}
&\left\{
\begin{array}{ll}
\mathbf v_{i} = \mathbf 0, \quad &\text{if $i \in \mathcal{C}$}, \\
\mathbf n^\top(t) \left(\mathbf v_{i} - \mathbf v_{i,ref} + k_p (\mathbf p_{i} - \mathbf p_{i,ref})\right) = 0,&\text{if $i \notin \mathcal{C}$},
\end{array}
\right.
\end{align*}
The surface normal, $\mathbf n(t)$, is interpolated over time, since the liftoff and touchdown terrains can have different orientations.
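Both cases amount to a small residual that the solver drives to zero. An illustrative per-foot evaluation (names assumed) is:
\begin{verbatim}
# Swing/contact equality constraints for one foot: zero velocity
# in contact; otherwise only the normal component of the swing
# tracking error is constrained.
import numpy as np

def foot_constraint(in_contact, v, v_ref, p, p_ref, n, k_p=1.0):
    if in_contact:
        return v                                  # require v == 0
    return np.array([n @ (v - v_ref + k_p*(p - p_ref))])
\end{verbatim}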
\section{Numerical Optimization}
\label{sect:perc:numerical_optimization}
\begin{algorithm}[tb]
\caption{Real-time iteration Multiple-shooting MPC}
\label{alg:SQP}
\footnotesize
\begin{algorithmic}[1]
\State \textbf{Given: } previous solution $\mathbf w_i$
\State Discretize the continuous problem to the form of \eqref{eq:perc:NMPC} \label{alg:line:dt}
\State Compute the linear quadratic approximation \eqref{eq:perc:SQP_QPSubproblem} \label{alg:line:LQ}
\State Compute the equality constraint projection \eqref{eq:perc:eq_projection} \label{alg:line:proj}
\State $\delta\tilde{\mathbf w} \gets $ Solve the projected QP subproblem \eqref{eq:perc:SQP_ProjectedQPSubproblem} \label{alg:line:hpipmsolve}
\State $\delta\mathbf w \gets \mathbf P \delta\tilde{\mathbf w} + \mathbf p$, back substitution using \eqref{eq:perc:qp_subspace} \label{alg:line:backproj}
\State $\mathbf w_{i+1} \gets $ Line-Search($\mathbf w_i$, $\delta \mathbf w$), (Algorithm~\ref{alg:ls}) \label{alg:line:ls}
\end{algorithmic}
\end{algorithm}
We consider a direct multiple-shooting approach to transforming the continuous optimal control problem into a finite-dimensional nonlinear program (NLP) \cite{bock1984multiple}. Since MPC computes control inputs over a receding horizon, successive instances of \eqref{eq:perc:NMPC} are similar and can be efficiently warm-started when taking an SQP approach \new{by shifting the previous solution. For new parts of the shifted horizon, for which no initial guess exists, we repeat the final state of the previous solution and initialize the inputs with the references generated in section~\ref{sect:perc:reference_generation}}. Additionally, we follow the real-time iteration scheme where only one SQP step is performed per MPC update~\cite{diehl2002realtime}. In this way, the solution is improved across consecutive instances of the problem, rather than iterating until convergence for each problem.
As an overview of the approach described in the following sections, pseudo-code is provided in Algorithm~\ref{alg:SQP}, referring to the relevant equations used at each step. Except for the solution of the QP in line~\ref{alg:line:hpipmsolve}, all steps of the algorithm are parallelized across the shooting intervals. The QP is solved using HPIPM~\cite{frison2020hpipm}.
\subsection{Discretization}
The continuous control signal $\mathbf u(t)$ is parameterized over subintervals of the prediction horizon $[t,t + T]$ to obtain a finite-dimensional decision problem. This creates a grid of nodes $k \in \{0, \shortdots, N\}$ defining control times $t_k$ separated by intervals of duration $\delta t \approx T/(N-1)$. Around gait transitions, $\delta t$ is slightly shortened or extended such that a node is exactly at the gait transition.
In this work, we consider a piecewise constant, or zero-order-hold, parameterization of the input. Denoting $\mathbf x_k = \mathbf x(t_k)$ and integrating the continuous dynamics in \eqref{eq:perc:mpc_full_dynamics} over an interval leads to a discrete time representation of the dynamics:
\begin{equation}
\mathbf f_k^d(\mathbf x_k, \mathbf u_k) = \mathbf x_k + \int_{t_k}^{t_k+\delta t} \mathbf f^c(\mathbf x(\tau), \mathbf u_k, \tau) \text{ d}\tau.
\label{eq:perc:discrete-dynamics}
\end{equation}
The integral in \eqref{eq:perc:discrete-dynamics} is numerically approximated with an integration method of choice to achieve the desired approximation accuracy of the evolution of the continuous time system under the zero-order-hold commands. We use an explicit second-order Runge-Kutta scheme.
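For reference, one common explicit second-order scheme (the midpoint variant, shown here only as an illustration of integrating under the zero-order-hold input) is:
\begin{verbatim}
# Midpoint (explicit RK2) step of x' = f(x, u, t) with a
# zero-order-hold input u_k over the interval [t_k, t_k + dt].
def rk2_step(f, x_k, u_k, t_k, dt):
    k1 = f(x_k, u_k, t_k)
    k2 = f(x_k + 0.5*dt*k1, u_k, t_k + 0.5*dt)
    return x_k + dt*k2
\end{verbatim}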
The general nonlinear MPC problem presented below can be formulated by defining and evaluating a cost function and constraints on the grid of nodes.
\begin{subequations}
\label{eq:perc:NMPC} %
\begin{flalign}
&\underset{\begin{subarray}{c}
\mathbf X, \mathbf U
\end{subarray}}{\min}& \mathclap{\,\, \Phi(\mathbf x_N) + \sum_{k=0}^{N-1} l_k(\mathbf x_k, \mathbf u_k) ,} \\
\label{eq:perc:NMPC-ic}
&\quad\text{s.t.}& \mathbf x_0 - \hat{\mathbf x} &= \mathbf 0, \\
\label{eq:perc:NMPC-dyn}
&& \mathbf x_{k+1} - \mathbf f^d_k(\mathbf x_k, \mathbf u_k) &= \mathbf 0, &&k = 0,\shortdots,N\!-\!1,\\
&& \mathbf g_k(\mathbf x_k, \mathbf u_k) & = \mathbf 0, &&k = 0,\shortdots,N\!-\!1,
\end{flalign}
\end{subequations}
where $\mathbf X = [\mathbf x_0^\top, \dots \mathbf x_N^\top]^\top$ and $\mathbf U = [\mathbf u_0^\top, \dots \mathbf u_{N-1}^\top]^\top$ are the sequences of state and input variables, respectively. The nonlinear cost and constraint functions $l_k$ and $\mathbf g_k$ are discrete samples of their continuous counterparts. Collecting all decision variables into a vector, $\mathbf w = [\mathbf X^\top, \mathbf U^\top]^\top$, problem \eqref{eq:perc:NMPC} can be written as a general NLP:
\begin{equation}
\underset{\mathbf w}{\min} \quad \phi(\mathbf w), \quad \text{s.t.} \quad
\begin{bmatrix}
\mathbf F(\mathbf w) \\
\mathbf G(\mathbf w)
\end{bmatrix} = \mathbf 0,
\label{eq:perc:NLP}
\end{equation}
where $\phi(\mathbf w)$ is the cost function, $\mathbf F(\mathbf w)$ is the collection of initial state and dynamics constraints, and $\mathbf G(\mathbf w)$ is the collection of all general equality constraints.
\subsection{Sequential Quadratic Programming (SQP)}
SQP-based methods apply Newton-type iterations to the Karush-Kuhn-Tucker (KKT) optimality conditions, assuming some regularity conditions on the constraints \cite{mangasarian1967fritz}. The Lagrangian of the NLP in \eqref{eq:perc:NLP} is defined as:
\begin{equation}
\mathcal{L}(\mathbf w, \mbox{\boldmath $\lambda$}_\mathbf F, \mbox{\boldmath $\lambda$}_\mathbf G) = \phi(\mathbf w) + \mbox{\boldmath $\lambda$}_\mathbf F^\top \mathbf F(\mathbf w) + \mbox{\boldmath $\lambda$}_\mathbf G^\top \mathbf G(\mathbf w),
\label{eq:perc:NLP_lagrangian}
\end{equation}
with Lagrange multipliers $\mbox{\boldmath $\lambda$}_\mathbf F$ and $\mbox{\boldmath $\lambda$}_\mathbf G$, corresponding to the dynamics and equality constraints. The Newton iterations can be equivalently computed by solving the following potentially non-convex QP \cite{nocedal2006numerical}:
\begin{subequations}
\label{eq:perc:SQP_QPSubproblem}
\begin{align}
\underset{\begin{subarray}{c}
\delta \mathbf w
\end{subarray}}{\min} & \quad
\nabla_\mathbf w \phi(\mathbf w_i)^\top \delta \mathbf w + \frac{1}{2} \delta \mathbf w^\top \mathbf B_i \delta \mathbf w, \label{eq:perc:SQP-qp-cost} & \\
\quad\text{s.t.} & \quad \mathbf F(\mathbf w_i) + \nabla_\mathbf w \mathbf F(\mathbf w_i)^\top \delta \mathbf w = \mathbf 0, & \label{eq:perc:SQP-qp-dynconstr} \\
& \quad \mathbf G(\mathbf w_i) + \nabla_\mathbf w \mathbf G(\mathbf w_i)^\top \delta \mathbf w = \mathbf 0, & \label{eq:perc:SQP-qp-eqconstr}
\end{align}
\end{subequations}
where the decision variables, $\delta \mathbf w = \mathbf w - \mathbf w_i$, define the update step relative to the current iteration $\mathbf w_i$, and the Hessian $\mathbf B_i = \nabla^2_\mathbf w\mathcal{L}(\mathbf w_i,\mbox{\boldmath $\lambda$}_\mathbf F, \mbox{\boldmath $\lambda$}_\mathbf G)$. Computing the solution to \eqref{eq:perc:SQP_QPSubproblem} provides a candidate decision variable update, $\delta \mathbf w_i$, and updated Lagrange multipliers.
\subsection{Quadratic Approximation Strategy}
As we seek to deploy MPC on dynamic robotic platforms, it is critical that the optimization problem in \eqref{eq:perc:SQP_QPSubproblem} is well conditioned and does not pose difficulties for numerical solvers. In particular, when $\mathbf B_i$ in \eqref{eq:perc:SQP-qp-cost} is positive semi-definite (p.s.d.), the resulting QP is convex and can be solved efficiently \cite{kouzoupis2018recent}.
To ensure this, an approximate, p.s.d.\ Hessian is used instead of the full Hessian of the Lagrangian. For the tracking costs \eqref{eq:perc:motion_cost}, the objective function has a least-squares form, in which case the Generalized Gauss-Newton approximation,
\begin{equation}
\nabla_\mathbf w^2 \left( \frac{1}{2}\|\mbox{\boldmath $\epsilon$}_i(\mathbf w)\|^2_{\mathbf W_i} \right) \approx \nabla_\mathbf w \mbox{\boldmath $\epsilon$}_i(\mathbf w)^\top \mathbf W_i \nabla_\mathbf w \mbox{\boldmath $\epsilon$}_i(\mathbf w),
\label{eq:perc:GaussNewton}
\end{equation}
proves effective in practice \cite{houska2011auto}. Similarly, for the soft constraints, we exploit the convexity of the penalty function applied to the nonlinear constraint \cite{verschueren2016exploiting}:
\begin{equation}
\nabla_\mathbf w^2 \left( \mathcal{B}(\mathbf h(\mathbf w)) \right) \approx \nabla_\mathbf w \mathbf h(\mathbf w)^\top \nabla_\mathbf h^2 \mathcal{B}(\mathbf h(\mathbf w)) \nabla_\mathbf w \mathbf h(\mathbf w),
\label{eq:perc:SCQP}
\end{equation}
where the diagonal matrix $\nabla_\mathbf h^2 \mathcal{B}(\mathbf h(\mathbf w))$ maintains the curvature information of the convex penalty functions. The contribution of the constraints to the Lagrangian in~\eqref{eq:perc:NLP_lagrangian} is ignored in the approximate Hessian since we do not have additional structure that allows a convex approximation.
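Both approximations discard the second derivative of the inner function and keep only first-order (Jacobian) information together with the outer, convex curvature. A sketch with illustrative names:
\begin{verbatim}
# Gauss-Newton block for a least-squares term 0.5*||eps(w)||^2_W
# and SCQP block for a convex penalty B(h(w)); J_* denote the
# Jacobians of eps and h w.r.t. w.
import numpy as np

def gauss_newton_hessian(J_eps, W):
    return J_eps.T @ W @ J_eps

def scqp_hessian(J_h, d2B_diag):
    # d2B_diag: element-wise 2nd derivatives of the penalty at h(w)
    return J_h.T @ np.diag(d2B_diag) @ J_h
\end{verbatim}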
\subsection{Constraint Projection}
The equality constraints in section~\ref{sect:perc:equality_constraints} were carefully chosen to have full row rank w.r.t.\ the control inputs, such that, after linearization, $\nabla_\mathbf w \mathbf G(\mathbf w_i)^\top$ has full row rank in \eqref{eq:perc:SQP-qp-eqconstr}. This means that the equality constraints can be eliminated before solving the QP through a change of variables~\cite{nocedal2006numerical}:
\begin{equation}
\delta\mathbf w = \mathbf P \delta\tilde{\mathbf w} + \mathbf p, \label{eq:perc:qp_subspace}
\end{equation}
where the linear transformation satisfies
\begin{equation}
\nabla_\mathbf w \mathbf G(\mathbf w_i)^\top\mathbf P = \mathbf 0, \quad \nabla_\mathbf w \mathbf G(\mathbf w_i)^\top \mathbf p = -\mathbf G(\mathbf w_i).
\label{eq:perc:eq_projection}
\end{equation}
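One way to realize such a projection (our implementation exploits the per-node structure instead of a monolithic factorization) is a QR-based null-space basis:
\begin{verbatim}
# Null-space basis P and particular solution p for the linearized
# constraints G_jac @ dw = -G_val, with G_jac of full row rank.
import numpy as np

def constraint_projection(G_jac, G_val):
    m, n = G_jac.shape
    Q, _ = np.linalg.qr(G_jac.T, mode='complete')
    P = Q[:, m:]                                  # G_jac @ P == 0
    p = -np.linalg.lstsq(G_jac, G_val, rcond=None)[0]
    return P, p
\end{verbatim}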
After substituting \eqref{eq:perc:qp_subspace} into \eqref{eq:perc:SQP_QPSubproblem}, the following QP is solved w.r.t.\ $\delta\tilde{\mathbf w}$:
\begin{subequations}
\label{eq:perc:SQP_ProjectedQPSubproblem}
\begin{align}
\underset{\begin{subarray}{c}
\delta \tilde{\mathbf w}
\end{subarray}}{\min} & \quad
\nabla_{\tilde{\mathbf w}} \tilde{\phi}(\mathbf w_i)^\top \delta \tilde{\mathbf w} + \frac{1}{2} \delta \tilde{\mathbf w}^\top \tilde{\mathbf B}_i \delta \tilde{\mathbf w}, \label{eq:perc:SQP-pr-qp-cost} & \\
\quad\text{s.t.} & \quad \tilde{\mathbf F}(\mathbf w_i) + \nabla_{\tilde{\mathbf w}} \tilde{\mathbf F}(\mathbf w_i)^\top \delta \tilde{\mathbf w} = \mathbf 0. & \label{eq:perc:SQP-pr-qp-dynconstr}
\end{align}
\end{subequations}
Because each constraint applies only to the variables at one node $k$, the coordinate transformation maintains the sparsity pattern of an optimal control problem and can be computed in parallel. Since this projected problem now only contains costs and system dynamics, solving the QP requires only one Riccati-based iteration~\cite{frison2020hpipm}. The full update $\delta\mathbf w$ is then obtained through back substitution into \eqref{eq:perc:qp_subspace}.
\subsection{Line-Search}
To select an appropriate stepsize, we employ a line-search based on the filter line-search used in IPOPT \cite{wachter2006implementation}. In contrast to a line-search based on a merit function, where cost and constraints are combined into one metric, the main idea is to ensure that each update improves either the constraint satisfaction or the cost function. The constraint satisfaction $\theta(\mathbf w)$ is measured by taking the norm of all constraints scaled by the time discretization:
\begin{equation}
\theta(\mathbf w) = \delta t \left\| \begin{bmatrix}
\mathbf F(\mathbf w)^\top,
\mathbf G(\mathbf w)^\top
\end{bmatrix}^\top\right\|_2.
\label{eq:perc:constraint_metric}
\end{equation}
In case of high or low constraint satisfaction, the behavior is adapted: when the constraint violation exceeds a set threshold, $\theta_\text{max}$, the focus shifts purely to decreasing the constraint violation; when the constraint violation is below a minimum threshold, $\theta_\text{min}$, the focus shifts to minimizing the cost.
Compared to the algorithm presented in \cite{wachter2006implementation}, we remove recovery strategies and second-order correction steps, for which there is no time in the online setting. Furthermore, the history of iterates plays no role since we perform only one iteration per problem.
The simplified line-search as used in this work is given in Algorithm~\ref{alg:ls} and contains three distinct branches in which a step can be accepted. The behavior at high constraint violation is given by line~\ref{ls:line:max}, where a step is rejected if the new constraint violation is above the threshold and worse than the current violation. The switch to the low constraint behavior is made in line~\ref{ls:line:min}: if both new and old constraint violations are low and the current step is in a descent direction, we require that the cost decrease satisfies the Armijo condition in line~\ref{ls:line:armijo}. Finally, the primary acceptance condition is given in line~\ref{ls:line:dual}, where either a cost or constraint decrease is requested. The small constants $\gamma_\phi$, and $\gamma_\theta$ are used to fine-tune this condition with a required non-zero decrease in either quantity.
\begin{algorithm}[!t]
\footnotesize
\caption{Backtracking Line-Search}
\label{alg:ls}
\begin{algorithmic}[1]
\State \textbf{Hyperparameters:} $
\alpha_\text{min}=10^{-4},
\theta_\text{max}=10^{-2},
\theta_\text{min}=10^{-6},
\eta=10^{-4},
\gamma_\phi=10^{-6},
\gamma_\theta=10^{-6},
\gamma_\alpha=0.5
$
\State $\alpha\leftarrow 1.0$
\State $\theta_{i} \gets \theta(\mathbf w_i)$
\State $\phi_{i} \gets \phi(\mathbf w_i)$
\State Accepted $\gets$ False
\While{\textit{Not} Accepted and $\alpha\geq \alpha_{\text{min}}$}
\State $\theta_{i+1} \gets \theta(\mathbf w_i + \alpha\delta\mathbf w)$
\State $\phi_{i+1} \gets \phi(\mathbf w_i + \alpha\delta\mathbf w)$
\If {$\theta_{i+1} > \theta_{\text{max}}$ \label{ls:line:max}}
\If {$\theta_{i+1} < (1 - \gamma_\theta) \theta_i$ \label{ls:line:constraint}}
\State Accepted $\gets$ True
\EndIf
\ElsIf {$\text{max}(\theta_{i+1},\theta_i) < \theta_{\text{min}}$ and $\nabla \phi(\mathbf w_i)^\top \delta\mathbf w < 0 $ \label{ls:line:min} }
\If {$\phi_{i+1} < \phi_i + \eta \alpha\nabla \phi(\mathbf w_i)^\top \delta\mathbf w$} \label{ls:line:armijo}
\State Accepted $\gets$ True
\EndIf
\Else {}
\If {$\phi_{i+1} < \phi_i - \gamma_\phi \theta_i$ or $
\theta_{i+1} < (1 - \gamma_\theta) \theta_i
$ \label{ls:line:dual}}
\State Accepted $\gets$ True
\EndIf
\EndIf
\If {\textit{Not} Accepted}
\State $\alpha\leftarrow \gamma_\alpha \alpha$
\EndIf
\EndWhile
\If {Accepted}
\State $\mathbf w_{i+1} \gets \mathbf w_i + \alpha\delta\mathbf w$
\Else
\State $\mathbf w_{i+1} \gets \mathbf w_i$
\EndIf
\end{algorithmic}
\end{algorithm}
\section{Motion Execution}
\label{sect:perc:motion_execution}
The optimized motion planned by the MPC layer consists of contact forces and desired joint velocities. We linearly interpolate the MPC motion plan at the \SI{400}{\hertz} execution rate and apply the feedback gains derived from the Riccati backward pass to the measured state~\cite{grandia2019feedback}. The corresponding torso acceleration is obtained through \eqref{eq:perc:torso_dynamics}. The numerical derivative of the planned joint velocities is used to determine a feedforward joint acceleration. A high-frequency whole-body controller (WBC) is used to convert the desired acceleration tasks into torque commands~\cite{sentis2006wholebody,saab2013dynamic,bellicoso2016perception}. A generalized momentum observer is used to estimate the contact state~\cite{bledt2018contact}. Additionally, the estimated external torques are filtered and added to the MPC and WBC dynamics as described in~\cite{jenelten2021TAMOLS}. \new{We use the same filter setup as shown in Fig.~13 of \cite{jenelten2021TAMOLS}.}
\subsection{Event based execution}
Inevitably, the measured contact state will be different from the planned contact state used during the MPC optimization. In this case, the designed contact forces cannot be provided by the whole-body controller. We have implemented simple reactive behaviors to respond to this situation and provide feedback to the MPC layer.
In case there is a planned contact, but no contact is measured, we follow a downward \textit{regaining} motion for that foot. Under the assumption that the contact mismatch will be short, the MPC will start a new plan again from a closed contact state. Additionally, we propagate the augmented system in \eqref{eq:perc:aug_system} with the information that no contact force was generated, i.e. $\mathbf 0 \overset{!}{=} \mathbf C_{\lambda} \mathbf s_{\lambda} + \mathbf D_{\lambda} \boldsymbol{ \nu}_{\lambda}$. In this way, the MPC layer will generate contact forces that maintain the requested smoothness w.r.t. the executed contact forces.
When contact is measured, but no contact was planned, the behavior depends on the planned time till contact. If contact was planned to happen soon, the measured contact is sent to the MPC to generate the next plan from that early contact state. \new{In the meantime, the WBC maintains a minimum contact force for that foot.} If no upcoming contact was planned, the measured contact is ignored.
\subsection{Whole-body control}
The whole-body control (WBC) approach considers the full nonlinear rigid body dynamics of the system in \eqref{eq:perc:full_rigid_body_dynamics}, including the estimate of disturbance forces. Each task is formulated as an equality constraint, inequality constraint, or least-squares objective affine in the generalized accelerations, torques, and contact forces. While we have used a hierarchical resolution of tasks in the past~\cite{bellicoso2016perception}, in this work, we instead use a single QP and trade off the tracking tasks with weights. We found that a strict hierarchy results in a dramatic loss of performance in lower priority tasks when inequality constraints are active. Additionally, the complexity of solving multiple QPs and null-space projections in the hierarchical approach is no longer justified with the high-quality motion reference coming from the MPC.
The complete list of tasks is given in Table~\ref{tab:perc:controllertasks}. The first two blocks of tasks enforce physical consistency and inequality constraints on torques, forces, and joint configurations. The joint limit constraint is derived from an exponential Control Barrier Function (CBF) \cite{nguyen2016exponential} on the joint limits, $\underline{\mathbf q}_j \leq \mathbf q_j \leq \overline{\mathbf q}_j$, resulting in the following joint acceleration constraints:
\begin{align}
\ddot{\mathbf q}_j + (\gamma_1 + \gamma_2) \dot{\mathbf q}_j + \gamma_1 \gamma_2 (\mathbf q_j - \underline{\mathbf q}_j )\geq \mathbf 0, \\
-\ddot{\mathbf q}_j - (\gamma_1 + \gamma_2) \dot{\mathbf q}_j + \gamma_1 \gamma_2 (\overline{\mathbf q}_j - \mathbf q_j ) \geq \mathbf 0,
\end{align}
with scalar parameters $\gamma_1 > 0, \gamma_2 > 0$. These CBF constraints guarantee that the state constraints are satisfied for all time and under the full nonlinear dynamics of the system~\cite{ames2014control}.
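Rearranged into bounds on the joint acceleration, the two inequalities read as in the following sketch (names illustrative):
\begin{verbatim}
# Exponential-CBF acceleration bounds for q_lb <= q <= q_ub.
import numpy as np

def cbf_acc_bounds(q, dq, q_lb, q_ub, g1=10.0, g2=10.0):
    ddq_min = -(g1 + g2)*dq - g1*g2*(q - q_lb)
    ddq_max = -(g1 + g2)*dq + g1*g2*(q_ub - q)
    return ddq_min, ddq_max   # enforce ddq_min <= ddq <= ddq_max
\end{verbatim}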
For the least-square tasks, we track swing leg motion with a higher weight than the torso reference. This prevents the robot from exploiting the leg inertia to track torso references in underactuated directions, and it ensures that foot motion is prioritized over torso tracking when close to kinematic limits. Tracking the contact force references with a low weight regulates the force distribution in case the contact configuration allows for internal forces.
Finally, the torque derived from the whole-body controller, $\mbox{\boldmath $\tau$}_{\text{wbc}}\in\mathbb{R}^{12}$, is computed. To compensate for model uncertainty for swing legs, the integral of joint acceleration error with gain $K > 0$ is added to the torque applied to the system:
\begin{align}
\mbox{\boldmath $\tau$}_i &= \mbox{\boldmath $\tau$}_{i,\text{wbc}} - K \int_{t^{sw}_{0}}^t \left( \ddot{\mathbf q}_i - \ddot{\mathbf q}_{i,\text{wbc}} \right) \text{d}t, \\
&\new{= \mbox{\boldmath $\tau$}_{i,\text{wbc}} - K \left(\dot{\mathbf q}_i - \dot{\mathbf q}_i(t^{sw}_{0}) - \int_{t^{sw}_{0}}^t \ddot{\mathbf q}_{i,\text{wbc}} \text{d}t \right),} \label{eq:joint_integral_impl}
\end{align}
where $t^{sw}_{0}$ is the start time of the swing phase. \new{The acceleration integral can be implemented based on the measured velocity $\dot{\mathbf q}_i$ and the velocity at the start of the swing phase, $\dot{\mathbf q}_i(t^{sw}_{0})$, as shown in \eqref{eq:joint_integral_impl}. Furthermore, the feedback term is saturated to prevent integrator windup.} For stance legs, a PD term is added around the planned joint configuration and contact consistent joint velocity.
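For the swing-leg integral term, the velocity-based form of \eqref{eq:joint_integral_impl} avoids numerically integrating the measured acceleration. A sketch with an assumed saturation limit:
\begin{verbatim}
# Swing-leg torque with integral acceleration feedback, realized
# via measured velocities; tau_max (assumed) saturates the term.
import numpy as np

def swing_torque(tau_wbc, dq, dq_sw0, int_ddq_wbc,
                 K=5.0, tau_max=10.0):
    """int_ddq_wbc: integral of the WBC joint accelerations since
    the start of the swing phase."""
    fb = K * (dq - dq_sw0 - int_ddq_wbc)
    fb = np.clip(fb, -tau_max, tau_max)   # anti-windup saturation
    return tau_wbc - fb
\end{verbatim}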
\begin{table}[bt]
\centering
\caption{Whole-body control tasks}
\begin{tabular}{|c|l|}
\hline
Type &Task \\
\hline
\multirow{2}{*}{$=$} & Floating base equations of motion. \\
& No motion at the contact points. \\
\hline
\multirow{3}{*}{$\geq$} & Torque limits. \\
& Friction cone constraint. \\
& Joint limit barrier constraint. \\
\hline
\multirow{3}{*}{$w_i^2 \| \cdot \|^2$} & Swing leg motion tracking ($w_i = 100.0$). \\
& Torso linear and angular acceleration ($w_i = 1.0$). \\
& Contact force tracking ($w_i = 0.01$). \\
\hline
\end{tabular}
\label{tab:perc:controllertasks}
\end{table}
\section{Results}
\label{sect:perc:results}
ANYmal is equipped with either two dome-shaped Robo-Sense bpearl LiDARs, mounted in the front and back of the torso, or with four Intel RealSense D435 depth cameras mounted on each side of the robot. Elevation mapping runs at \SI{20}{\hertz} on an onboard GPU (Jetson AGX Xavier). Control and state estimation are executed on the main onboard CPU (Intel i7-8850H, \SI{2.6}{\giga\hertz}, hexa-core) at \SI{400}{\hertz}, asynchronously to the MPC optimization, which is triggered at \SI{100}{\hertz}. Four cores are used for parallel computation in the MPC optimization. A time horizon of $T =$ \SI{1.0}{\second} is used with a nominal time discretization of $\delta t \approx$ \SI{0.015}{\second}, with a slight variation due to the adaptive discretization around gait transitions. Each multiple-shooting MPC problem therefore contains around $5000$ decision variables. Parts (\textbf{A}) and (\textbf{B}) of the perception pipeline in Fig.~\ref{fig:perc:perception_overview} are executed on a second onboard CPU of the same kind, which provides the precomputed layers over Ethernet.
To study the performance of the proposed controller, we report results in different scenarios and varying levels of detail. \new{All perception, MPC, and WBC parameters remain constant throughout the experiments and are the same for simulation and hardware. An initial guess for these parameters was found in simulation, and we further fine-tuned them on hardware.} First, results for the perception pipeline in isolation are presented in section~\ref{sect:perc:results_perception}. Second, we validate the major design choices in simulation in section~\ref{sect:perc:results_simulation}. Afterward, the proposed controller is put to the test in challenging simulation, as well as hardware experiments in section~\ref{sect:perc:results_hardware}. All experiments are shown in the supplemental video\cite{video}. Finally, known limitations are discussed in section~\ref{sect:perc:results_limitations}.
\subsection{Perception Pipeline}
\label{sect:perc:results_perception}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,trim=0 0 0 0,clip]{figures/demo_terrain.png}
\caption{Evaluation of the plane segmentation on a demo terrain \cite{fankhauser2016universal}. The shown map has a true size of \SI{20}{}$\times$\SI{20}{}$\times$\SI{1}{\meter} with a resolution of \SI{4}{\centi\meter}. Top left shows the elevation map with additive uniform noise of $\pm$\SI{2}{\centi\meter} plus Gaussian noise with a standard deviation of \SI{2}{\centi\meter}. Top right shows the map after inpainting, filtering, steppability classification, and plane segmentation. Below, four areas of interest are shown. Their original location in the map is marked in the top right image.}
\label{fig:perc:demo_terrain}
\end{figure}
The output of the steppability classification and plane segmentation (part A in Fig.~\ref{fig:perc:perception_overview}) for a demo terrain is shown in Fig.~\ref{fig:perc:demo_terrain}. This terrain is available as part of the gridmap library and contains a collection of slopes, steps, curvatures, rough terrain, and missing data. The left middle image shows that slopes and steps are, in general, well segmented. In the bottom right image, one sees the effect of the plane segmentation on a curved surface: in those cases, the terrain is segmented into a collection of smaller planes. Finally, the rough terrain sections shown in the right middle and bottom left images show that the method is able to recognize such terrain as one big planar section as long as the roughness is within the specified tolerance. These cases also show the importance of allowing holes in the segmented regions, making it possible to exclude just those small regions where the local slope or roughness is outside the tolerance. A global convex decomposition of the map would result in a much larger number of regions.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth,trim=0 0 0 0,clip]{figures/sdf_computation.pdf}
\caption{Computation time for constructing and querying the signed distance field. Submaps of the terrain in Fig.~\ref{fig:perc:demo_terrain} are used. \textit{SDF size} on the horizontal axis denotes the total amount of data points in the SDF (width$\times$length$\times$height). The query time is reported for a total of $10^3$ random queries for the interpolated value and derivative.}
\label{fig:perc:sdf_computation}
\end{figure}
The computation time for the construction and querying of the signed distance field is benchmarked on sub-maps of varying sizes extracted from the demo map, see Fig.~\ref{fig:perc:sdf_computation}. As expected, the construction time scales linearly with the SDF size, and the query time is constant, with a slight increase when the memory size exceeds a cache level. During runtime, the local SDF size is typically below $10^5$ voxels, resulting in a computation time well below \SI{10}{\milli\second}. Together with the map update rate of \SI{20}{\hertz}, the proposed method thus provides the SDF an order of magnitude faster than methods that maintain a general 3D voxel grid, with update rates reported around \SI{1}{\hertz}~\cite{pankert2020perceptive}. Per MPC iteration, around $10^3$ SDF queries are made, making the SDF query time negligible compared to the total duration of one MPC iteration.
\subsection{Simulation}
\label{sect:perc:results_simulation}
\subsubsection{Collision avoidance}
To highlight the importance of considering knee collisions with the terrain, the robot is commanded to traverse a box of \SI{35}{\centi\meter} with a trotting gait at \SI{0.25}{\meter\per\second}. Fig.~\ref{fig:perc:box_climb_knee_col} compares the simulation result of this scenario with and without the knee collisions considered. The inclusion of knee collision avoidance is required to successfully step up the box with the hind legs. As shown in the figure, the swing trajectories are altered. Furthermore, the base pose and last stepping location before stepping up are adjusted to prepare for the future, showing the benefit of considering all degrees of freedom in one optimization. \new{Similarly, on the way down, the foothold optimization (within constraints) allows that the feet are placed away from the step, avoiding knee collisions while stepping down.}
\begin{figure}[!tb]
\centering
\begin{minipage}{.49\columnwidth}
\centering
\includegraphics[width=\linewidth,trim=550 300 430 150,clip]{figures/box_climb_knee_col.png}
\end{minipage}
\begin{minipage}{.49\columnwidth}
\centering
\includegraphics[width=\linewidth,trim=550 300 430 150,clip]{figures/box_climb_no_knee_col.png}
\end{minipage}
\caption{ANYmal stepping up a box of \SI{35}{\centi\meter}. Left: Without considering knee collisions. Right: Knee collision included in the optimization.}
\label{fig:perc:box_climb_knee_col}
\end{figure}
Fig.~\ref{fig:perc:box_climb_solver} provides insight into the solver during the motion performed with the knee collisions included. The four peaks in the cost function show the effect of the collision avoidance penalty when the legs are close to the obstacle during the step up and step down. Most of the time, the step obtained from the QP subproblem is accepted by the line-search with the full stepsize of $1.0$. However, between \SI{7}{} and \SI{8}{\second} the stepsize is decreased to prevent the constraint violation from further rising. This happens when the front legs step down the box and are close to collision. In those cases, the collision avoidance penalty is highly nonlinear, and the line-search is required to maintain the right balance between cost decrease and constraint satisfaction. We note that the line-search condition for low constraint violation is typically not achieved when using only one iteration per MPC problem.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/box_climb_solver-cropped.pdf}
\caption{Solver status during the box traversal motion (including knee collision avoidance). The first and second plots show the total cost and the constraint violation according to~\eqref{eq:perc:constraint_metric} after each iteration. The bottom plot shows the stepsize and the line-search branch that led to the step acceptance. `Constraint' refers to a step accepted in the high constraint violation branch in line~\ref{ls:line:max} of Algorithm~\ref{alg:ls}, \new{`Cost OR Constraint'} refers to the branch where either cost or constraint decrease is accepted in line~\ref{ls:line:dual}. \new{Note that the low constraint violation branch, line~\ref{ls:line:min}, did not occur in this experiment.}}
\label{fig:perc:box_climb_solver}
\end{figure}
\subsubsection{Model selection}
In the same scenario, we compare the performance of the proposed dynamics for the base with those of the commonly used single rigid body dynamics (SRBD). To be precise, the torso dynamics in \eqref{eq:perc:torso_dynamics} are evaluated at a constant nominal joint configuration and with zero joint velocities, while the rest of the controller remains identical. When using the SRBD, the model does not describe the backward shift in the center of mass location caused by the leg configuration. The result is that the controller with the SRBD model has a persistent bias that makes the robot almost tip over during the step up. \new{This model error is quantified in Fig.~\ref{fig:perc:com_comparison}. At a \SI{30}{\degree} pitch angle, there is a center of mass error of \SI{2.6}{\centi\meter}, resulting in a bias of \SI{13.3}{\newton\meter} at the base frame. For reference, this is equivalent to an unmodelled payload of \SI{3.6}{\kilo\gram} at the tip of the robot.} The proposed model fully describes the change in inertia and center of mass location and therefore has no difficulty predicting the state trajectory during the step-up motion.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{figures/com_plot.pdf}
\caption{\new{The location of the center of mass (CoM) in heading direction for various torso pitch angles. The first set of CoM locations is evaluated with the \textit{true} joint angles, which are obtained when aligning the legs with the gravity direction as in the top image. This corresponds to the reference in section \ref{sect:perc:reference_generation}, which is tracked by the MPC. The second set of CoM locations is evaluated for the \textit{default} joint angles, shown in the bottom image, as assumed by the SRBD model.}}
\label{fig:perc:com_comparison}
\end{figure}
\new{
\subsubsection{Solver Comparison}
To motivate our choice to implement a multiple-shooting solver and move away from the DDP-based methods used in previous work, we compare both approaches on flat terrain and the stepping stone scenario shown in Fig.~\ref{fig:perc:stepping_stones_sim}. In particular, we compare against iLQR~\cite{tassa2012synthesis} and implement it with the same constraint projection, line-search, and the Riccati Backward pass of HPIPM as described in section~\ref{sect:perc:numerical_optimization}. The key difference between the algorithms lies in the update step. For multiple-shooting, we update both state and inputs directly: $\mathbf u^+_k = \mathbf u_k + \alpha\delta\mathbf u_k$, $\mathbf x^+_k = \mathbf x_k + \alpha\delta\mathbf x_k$. In contrast, iLQR proceeds with a line-search over closed-loop nonlinear \textit{rollouts} of the dynamics:
\begin{align}
\mathbf u^+_k &= \mathbf u_k + \alpha \mathbf k_k + \mathbf K_k \left(\mathbf x^+_k - \mathbf x_k \right), \\
\mathbf x^+_{k+1} &= \mathbf f^d_k(\mathbf x^+_k, \mathbf u^+_k), \quad\qquad \mathbf x^+_0 = \hat{\mathbf x},
\end{align}
where $\mathbf K_k$ is the optimal feedback gain obtained from the Riccati Backward pass and $\mathbf k_k = \delta \mathbf u_k - \mathbf K_k \delta \mathbf x_k $ is the control update. Due to this inherently single-threaded process, each line-search for iLQR takes four times as long as for the multi-threaded multiple-shooting. However, note that with the hybrid multiple-shooting-iLQR variants in \cite{giftthaler2018family} this difference vanishes.
Table~\ref{tab:perc:solver_comparison} reports the solvers' average cost, dynamics constraint violation, and equality constraint violation for a trotting gait in several scenarios. As a baseline, we run the multiple-shooting solver until convergence (with a maximum of 50 iterations) instead of real-time iteration. To test the MPC in isolation, we use the MPC dynamics as the simulator and apply the MPC input directly. Because of the nonlinear rollouts of iLQR, dynamics constraints are always satisfied, and iLQR, therefore, has the edge over multiple-shooting on this metric. However, as the scenario gets more complex and the optimization problem becomes harder, there is a point where the forward rollout of iLQR is unstable and diverges. For the scenario shown in Fig.~\ref{fig:perc:stepping_stones_sim}, this happens in the place where the robot is forced to take a big leap at the \SI{63}{\percent} mark and at the \SI{78}{\percent} mark where the hind leg is close to singularity as the robot steps down. The continuous time variant SLQ~\cite{farshidian2017efficient} fails in similar ways. These failure cases are sudden, unpredictable, and happen regularly when testing on hardware, where imperfect elevation maps, real dynamics, and disturbances add to the challenge. The absence of long horizon rollouts in the multiple-shooting approach makes it more robust and better suited for the scenarios shown in this work. For cases where both solvers are stable, we find that the small dynamics violation left with multiple-shooting in a real-time iteration setting does not translate to any practical performance difference on hardware.
Finally, even for the most challenging scenario, multiple-shooting with real-time iteration remains within \SI{10}{\percent} cost of the baseline.
}
\begin{figure*}[!bt]
\centering
\includegraphics[width=\linewidth,trim={0 0 0 0},clip]{figures/stepping_stones_sim.png}
\caption{\new{ANYmal traversing stepping stones in simulation (right to left). The resulting state trajectories for feet and torso, and the snapshots are shown for a traversal with the multiple-shooting solver and a trotting gait at \SI{0.75}{\meter\per\second}. The marked \SI{63}{\percent} and \SI{78}{\percent} locations indicate where the alternative solver, iLQR, diverges for \SI{0.5}{\meter\per\second} and \SI{0.75}{\meter\per\second}, respectively.}}
\label{fig:perc:stepping_stones_sim}
\end{figure*}
\begin{table}[tb]
\caption{Solver comparison on flat terrain and stepping stones. The Baseline iterates until convergence instead of using real-time iteration.}
\label{tab:perc:solver_comparison}
\begin{tabular}{lrrr}
\hline
\multicolumn{1}{l}{} & \multicolumn{1}{c}{Baseline}\rule{0pt}{14pt} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Multiple\\ shooting\end{tabular}}& \multicolumn{1}{c}{iLQR} \\
\hline
\multicolumn{1}{l}{\textit{Flat - \SI{0.50}{\meter/\second}}}\rule{0pt}{12pt} & & & \\
Cost & $ 54.19$ & $ 54.17$ & $54.18 $ \\
Dynamics Constr. & $ 1.70\times 10^{-7}$ & $3.41\times 10^{-3}$ & $0.0 $ \\
Equality Constr. & $ 2.28\times 10^{-6}$ & $3.51\times 10^{-3}$ & $3.51 \times 10^{-3}$ \\
\multicolumn{1}{l}{\textit{Stones - \SI{0.25}{\meter/\second}}}\rule{0pt}{12pt} & & & \\
Cost & $ 151.92$ & $ 156.22$ & $ 156.69$ \\
Dynamics Constr. & $ 3.58 \times 10^{-5}$ & $1.01\times 10^{-2}$ & $0.0$ \\
Equality Constr. & $ 7.24 \times 10^{-4}$ & $2.28\times 10^{-2}$ & $2.14\times 10^{-2}$ \\
\multicolumn{1}{l}{\textit{Stones - \SI{0.50}{\meter/\second}}}\rule{0pt}{12pt} & & & \\
Cost & $ 155.06 $ & $ 165.72 $ & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}diverged at $78\%$ \\ scenario progress\end{tabular}} \\
Dynamics Constr. & $ 2.18\times 10^{-5}$ & $1.53\times 10^{-2}$ & \\
Equality Constr. & $ 3.93\times 10^{-4}$ & $3.82\times 10^{-2}$ & \\
\multicolumn{1}{l}{\textit{Stones - \SI{0.75}{\meter/\second}}}\rule{0pt}{12pt} & & & \\
Cost & $ 199.49$ & $215.98 $ & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}diverged at $63\%$ \\ scenario progress\end{tabular}} \\
Dynamics Constr. & $ 1.20\times 10^{-4}$ & $2.36\times 10^{-2}$ & \\
Equality Constr. & $ 1.29\times 10^{-3}$ & $5.61\times 10^{-2}$ & \\
\hline
\end{tabular}
\end{table}
\subsubsection{Contact feedback} The reactive behavior under a mismatch in planned and sensed contact information is shown in the accompanying video. First, the sensed terrain is set to be \SI{10}{\centi\meter} above the actual terrain, causing a late touchdown. Afterward, the sensed terrain is set \SI{5}{\centi\meter} below the actual terrain, causing an early touchdown. The resulting vertical foot velocity for both cases is overlaid and plotted in Fig.~\ref{fig:perc:touchdown_vel}. For the case of a late touchdown, the reactive downward accelerating trajectory is triggered as soon as the absence of contact is sensed. For the early touchdown case, there is a short delay in detecting that contact has happened, but once contact is detected, the measured contact is included in the MPC and the new trajectory is immediately replanned from the sensed contact location.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.7\columnwidth]{figures/touchdown_vel_with_text.pdf}
\caption{Desired and measured vertical foot velocity for the early and late touchdown scenarios shown in the accompanying video. The vertical line at \SI{0.5}{\second} indicates the planned touchdown time.}
\label{fig:perc:touchdown_vel}
\end{figure}
\begin{figure*}[!bt]
\centering
\includegraphics[width=\linewidth,trim={250 310 200 290},clip]{figures/flying_panorama.png}
\caption{ANYmal traversing an obstacle course in simulation \new{(left to right)}. Snapshots are shown for a traversal with a trotting gait at \SI{0.8}{\meter\per\second}. The MPC predictions are shown for each foot and for the torso center. For all contact phases within the horizon, the convex foot placement constraints are visualized.}
\label{fig:perc:obstacle_course}
\end{figure*}
\subsubsection{Stairs}
The generality of the approach with respect to the gait pattern is demonstrated in the accompanying video by executing a trot at \SI{0.25}{\meter\per\second}, a pace at \SI{0.3}{\meter\per\second}, a dynamic walk at \SI{0.25}{\meter\per\second}, and a static walk at \SI{0.2}{\meter\per\second} on stairs with an \SI{18.5}{\centi\meter} rise and a \SI{24}{\centi\meter} run. Depending on the particular gait pattern and commanded velocity, the method autonomously decides to progress, repeat, or skip a step. Note that there are no parameters or control modes specific to the gait or the stair climbing scenario. All motions emerge automatically from the optimization of the formulated costs and constraints.
\subsubsection{Obstacle course} The controller is given a constant forward velocity command on a series of slopes, gaps, stepping stones, and other rough terrains. We traverse the terrain with a pace at \SI{0.4}{\meter\per\second} and a fast trotting gait with a flight phase at \SI{0.8}{\meter\per\second}. Fig.~\ref{fig:perc:obstacle_course} shows the obstacle course and snapshots of the traversal with the fast trot. The supplemental video shows the planned trajectories for the feet together with the convex foothold constraints. On the right side of the screen, a front view is shown together with the elevation map and plane segmentation below. The slower gaits used in the previous section are able to complete the scenario as well, but their video is excluded as they take considerably longer to reach the end.
Finally, a transverse gallop gait is demonstrated on a series of gaps. Due to the torque limitations of the system and friction limits up the slope, this gait is not feasible on the more complex obstacle course.
\subsubsection{Comparison against RL}
We compare our method against a perceptive RL-based controller~\cite{miki2022learning} in the same obstacle course. We adapt the gait pattern of our controller to match the nominal gait used by the learned controller. The video shows that the learning-based controller can cross the unstructured terrain at the beginning and end of the obstacle course. However, it fails to use the perceptive information fully and falls between the stepping stones when starting from the left and off the narrow passage when starting from the right. \new{While the RL controller was not specifically trained on stepping stones,} this experiment highlights that current RL-based locomotion results in primarily reactive policies and struggles with precise coordination and planning over longer horizons. In contrast, using a model and online optimization along a horizon makes our proposed method generalize naturally to these more challenging terrains.
\subsection{Hardware}
\label{sect:perc:results_hardware}
\begin{figure*}[!tb]
\centering
\begin{minipage}{\linewidth}
\centering
\includegraphics[width=\linewidth,trim={0 60 0 180},clip]{figures/table_climbing_real.jpg}
\end{minipage}
\begin{minipage}{\linewidth}
\centering
\includegraphics[width=\linewidth,trim={0 60 0 180},clip]{figures/table_climbing_rviz.jpg}
\end{minipage}
\caption{Hardware experiment where ANYmal traverses a ramp, gap, and large step (from right to left). The bottom row shows the filtered elevation map, the foot trajectories over the MPC horizon, and the convex foothold constraints.}
\label{fig:perc:tableclimbing}
\end{figure*}
\begin{figure}[!tb]
\centering
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth,trim={100 60 200 0},clip]{figures/stepping_stones_real.jpg}
\end{minipage}%
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth,trim={150 150 300 0},clip]{figures/stepping_stones_rviz.jpg}
\end{minipage}
\caption{Hardware experiment where ANYmal walks on top of uneven stepping stones. Each wooden block has an area of \SI{20}{}$\times$\SI{20}{\centi\meter} and each level of stepping stones is \SI{20}{\centi\meter} higher than the previous one. The right image shows the filtered elevation map, the foot trajectories over the MPC horizon, and the convex foothold constraints.}
\label{fig:perc:steppingstones}
\end{figure}
\subsubsection{Obstacle course}
The obstacle course simulation experiment is recreated on hardware in two separate experiments. First, we tested a sequence of a ramp, gap, and high step, as shown in Fig.~\ref{fig:perc:tableclimbing}. During the middle section of this experiment, the robot faces all challenges simultaneously: while the front legs are stepping up to the final platform, the hind legs are still dealing with the ramp and gap. In the second scenario, the robot walks on a set of uneven stepping stones, as shown in Fig.~\ref{fig:perc:steppingstones}. The main challenge here is that the planes on the stepping stones are small and do not leave much room for the MPC to optimize the footholds. We found that in this scenario, the inclusion of the kinematic penalty and the reactive foothold offset during the plane selection, as described in section~\ref{sect:perc:reference_generation}, is important. A remaining challenge is that our plane segmentation does not consider consistency over time. In some cases, the small foothold regions on top of stepping stones might appear and disappear as feasible candidates. The supplemental video shows how in this case the planned foot trajectory can fail, and the reactive contact regaining is required to save the robot.
\begin{table}[!tb]
\caption{Computation times per map update and MPC iteration}
\label{tab:perc:computation_times}
\centering
\begin{tabular}{r|rr}
& Mean [\si{\milli\second}] & Max [\si{\milli\second}] \\ \hline
Classification \& Segmentation & 38.8 & 76.6 \\
Signed distance field & 1.3 & 7.6 \\ \hline
LQ approximation & 3.6 & 6.2 \\
QP solve & 2.7 & 4.4 \\
Line-search & 0.3 & 0.9 \\ \hline
MPC iteration & 6.6 & 9.8 \\
\end{tabular}
\end{table}
Computation times are reported in Table~\ref{tab:perc:computation_times}. Per map update, most time is spent on terrain classification and plane segmentation. More specifically, the RANSAC refinement takes the most time and, due to its sampling-based nature, causes a high worst-case computation time. On average, the perception pipeline is able to keep up with the \SI{20}{\hertz} map updates.
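The gap between the mean and worst-case segmentation time stems from the varying number of candidate models that RANSAC evaluates. The following minimal plane-RANSAC sketch illustrates this; the iteration count and inlier threshold are illustrative assumptions, not the values used in our pipeline.
\begin{verbatim}
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, seed=0):
    # Fit a plane to an (N, 3) point array. Runtime scales with
    # n_iters and the retries on degenerate samples, which is why
    # the worst case can be far above the mean.
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # skip degenerate (collinear) samples
            continue
        normal /= norm
        dists = np.abs((points - sample[0]) @ normal)
        inliers = dists < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]
\end{verbatim}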
For the MPC computation time, ``LQ approximation'' contains the parallel computation of the linear-quadratic model and the equality-constraint projection (Algorithm~\ref{alg:SQP}, lines \ref{alg:line:dt} to \ref{alg:line:proj}). ``QP solve'' contains the solution of the QP and the back substitution of the solution (Algorithm~\ref{alg:SQP}, lines \ref{alg:line:hpipmsolve} and \ref{alg:line:backproj}). Despite the parallelization across four cores, evaluating the model takes the majority of the time, with the single-core QP solve in second place. On average, the total computation time is sufficient for the desired update rate of \SI{100}{\hertz}. The worst-case computation times are rare, and we hypothesize that they are mainly caused by variance in the scheduling of the numerous parallel processes on the robot. For the line-search, the relatively high maximum computation time is attained when several steps are rejected and the costs and constraints need to be recomputed.
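For reference, the timed stages map onto one SQP iteration roughly as in the schematic below. The \texttt{problem} interface is a placeholder (any object providing these methods would run it), not the actual solver API, and the merit-based backtracking is a simplification of our line-search.
\begin{verbatim}
def sqp_iteration(x_traj, u_traj, problem):
    # 1) LQ approximation: linearize dynamics and quadratize costs
    #    at every node (parallelizable across the horizon), then
    #    project out the equality constraints.
    lq = [problem.lq_approx(x, u) for x, u in zip(x_traj, u_traj)]
    lq = [problem.project_equalities(node) for node in lq]
    # 2) QP solve: solve the structured QP and map the projected
    #    solution back to the full decision variables.
    dx, du = problem.solve_qp(lq)
    dx, du = problem.back_substitute(dx, du)
    # 3) Line-search: backtrack until the merit function improves.
    #    Each rejected step re-evaluates costs and constraints,
    #    which explains the line-search worst case in the table.
    merit_old = problem.merit(x_traj, u_traj)
    alpha = 1.0
    while alpha > 1e-3:
        x_new = [x + alpha * d for x, d in zip(x_traj, dx)]
        u_new = [u + alpha * d for u, d in zip(u_traj, du)]
        if problem.merit(x_new, u_new) < merit_old:
            return x_new, u_new
        alpha *= 0.5
    return x_traj, u_traj    # no acceptable step found
\end{verbatim}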
\begin{figure}[!tb]
\centering
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth,trim={400 100 300 0},clip]{figures/outdoor_stairs_real.jpg}
\end{minipage}%
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth,trim={300 55 400 0},clip]{figures/outdoor_stairs_rviz.jpg}
\end{minipage}
\caption{Hardware experiment where ANYmal walks up and down outdoor stairs with a \SI{16}{\centi\meter} rise and \SI{29.5}{\centi\meter} run. The right image shows the filtered elevation map, the foot trajectories over the MPC horizon, and the convex foothold constraints.}
\label{fig:perc:outdoor_stairs}
\end{figure}
\subsubsection{Stairs} We validate the stair-climbing capabilities on two-step indoor stairs and on outdoor stairs. Fig.~\ref{fig:perc:outdoor_stairs} shows the robot on its way down the outdoor stairs. For these experiments, we obtain the elevation map from~\cite{hoeller2022neural}. With its learning-based approach, it provides a high-quality estimate of the structure underneath the robot. \new{Note that this module only replaces the source of the elevation map in Fig.~\ref{fig:perc:perception_overview} and does not change the rest of our perception pipeline. Figs.~\ref{fig:perc:stairs_velocities} and~\ref{fig:perc:stairs_torques} show the measured joint velocities and torques alongside the same quantities within the MPC solution for five strides of the robot walking up the stairs. The optimized MPC values are within the specified limits and close to the measured values.}
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/stairs_velocities.pdf}
\caption{\new{Measured and MPC-commanded joint velocities for the left front leg while walking up the stairs shown in Fig.~\ref{fig:perc:outdoor_stairs}. All joints, Hip Abduction-Adduction (HAA), Hip Flexion-Extension (HFE), and Knee Flexion-Extension (KFE), have a velocity limit of $\pm$\SI{7.5}{\radian\per\second}.}}
\label{fig:perc:stairs_velocities}
\end{figure}
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\columnwidth]{figures/stairs_torques.pdf}
\caption{\new{Measured torque and the approximated torque within the MPC formulation $(\mbox{\boldmath $\tau$}_i = \mathbf J^\top_{j,i} \mbox{\boldmath $\lambda$}_i)$ for the left front leg while walking up the stairs shown in Fig.~\ref{fig:perc:outdoor_stairs}. All joints, Hip Abduction-Adduction (HAA), Hip Flexion-Extension (HFE), and Knee Flexion-Extension (KFE), have a torque limit of $\pm$\SI{80}{\newton\meter}.}}
\label{fig:perc:stairs_torques}
\end{figure}
\subsection{Limitations}
\label{sect:perc:results_limitations}
A fundamental limitation of the proposed controller is that the gait pattern is externally given and only adapted during early and late touchdown. Strong adverse disturbances, for example, in the direction of a foot that will soon lift, can make the controller fail. A change in the stepping pattern could be a much better response in such cases. Together with the reactive behaviors during contact mismatch, which are currently hardcoded, this points to the potential of reinforcement-learning-based tracking controllers to add robustness during execution.
Closely related to that, the current selection of the segmented plane and, therefore, the resulting foothold constraints happens independently for each leg. In some cases, this can lead to problems that could have been avoided if all legs were considered simultaneously. For example, while walking up the stairs sideways, all feet can end up on the same tread, leading to fragile support and potential self-collisions. \new{Similarly, the presented method targets local motion planning and control, and we should not expect global navigation behavior. The current approach will attempt to climb over gaps and obstacles if so commanded by the user and will not autonomously navigate around them.}
As with all gradient-based methods for nonlinear optimization, local optima and infeasibility can be an issue. By simplifying the terrain to convex foothold constraints and using a heuristic reference motion in the cost function, we have aimed to minimize such problems. Still, we find that in the case of very thin and tall obstacles, the optimization can get stuck. Fig.~\ref{fig:perc:hurdles_stuck} shows an example where the foothold constraints lie behind the obstacle and the reference trajectory correctly clears the obstacle. Unfortunately, one of the feet in the MPC trajectory goes right through the obstacle. Because all SDF gradients are horizontal at that part of the obstacle, there is no strong local hint that the obstacle can be avoided. For future work, we can imagine detecting such a case and triggering a sampling-based recovery strategy to provide a new, collision-free initial guess. Alternatively, recent learning-based initialization schemes could be employed~\cite{melon2021receding,lembono2020learning}.
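This failure mode can be reproduced with a toy SDF: next to a thin, tall box and below its top edge, the distance gradient is purely horizontal, so a gradient step never pushes the foot upward over the obstacle. A minimal sketch using an axis-aligned box SDF follows; the hurdle dimensions match Fig.~\ref{fig:perc:hurdles_stuck}, everything else is illustrative.
\begin{verbatim}
import numpy as np

def box_sdf(p, half_extents):
    # Signed distance to an axis-aligned box centered at the origin.
    q = np.abs(p) - half_extents
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def sdf_gradient(p, half_extents, eps=1e-5):
    # Central finite-difference gradient, normalized.
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        g[i] = (box_sdf(p + d, half_extents)
                - box_sdf(p - d, half_extents)) / (2 * eps)
    return g / np.linalg.norm(g)

# A 5 cm wide, 20 cm tall hurdle (half extents in meters). Beside
# the wall and below the top edge, the gradient is ~[1, 0, 0]:
# purely horizontal, with no vertical hint to step over.
hurdle = np.array([0.025, 0.25, 0.10])
print(sdf_gradient(np.array([0.05, 0.0, 0.05]), hurdle))
\end{verbatim}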
\begin{figure}[!tb]
\centering
\includegraphics[width=0.70\columnwidth,trim={0 0 0 0},clip]{figures/hurdles_stuck.png}
\caption{Example of the MPC optimization being stuck inside a tall and thin structure of \SI{5}{\centi\meter} width and \SI{20}{\centi\meter} height. The foot reference trajectories used as part of the cost function are visualized as a sequence of arrows.}
\label{fig:perc:hurdles_stuck}
\end{figure}
\new{Finally, we show a gallop and a trot with flight phases at the end of the video. For these motions, the perceptive information is turned off, and the robot estimates the ground plane through a history of contact points, since the elevation map is not usable due to artifacts from impacts and state-estimation drift. These experiments demonstrate that the presented MPC is ready to express and stabilize such highly dynamic motions.}
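Estimating the ground as a plane from recent contacts can be done with a simple least-squares fit, as in the sketch below; the windowing of contact points and the explicit plane parameterization are our assumptions, not necessarily the robot's implementation.
\begin{verbatim}
import numpy as np

def fit_ground_plane(contact_points):
    # Least-squares plane z = a*x + b*y + c through the last N
    # foot-contact positions (N >= 3, not collinear). Returns the
    # unit normal and the height offset c.
    pts = np.asarray(contact_points)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    return normal / np.linalg.norm(normal), c
\end{verbatim}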
\section{Conclusion}
\label{sect:perc:conclusion}
In this work, we proposed a controller capable of perceptive and dynamic locomotion in challenging terrain. By formulating perceptive foot placement constraints through a convex inner approximation of steppable terrain, we obtain a nonlinear MPC problem that can be solved reliably and efficiently with the presented numerical strategy. Steppability classification, plane segmentation, and an SDF are all precomputed and updated at \SI{20}{\hertz}. Asynchronously precomputing this information minimizes the time required for each MPC iteration and makes the approach real-time capable. Furthermore, by including the complete joint configuration in the system model, the method can simultaneously optimize foot placement, knee collision avoidance, and underactuated system dynamics. With this rich set of information encoded in the optimization, the approach discovers complex motions autonomously and generalizes across various gaits and terrains that require precise foot placement and whole-body coordination.