\section{Introduction}
While modern deep learning solves several challenging tasks successfully, a series of recent works \citep{geirhos2018imagenet, gururangan2018annotation, feldman2015certifying} have reported that the high accuracy of deep networks on in-distribution samples does not always guarantee low test error on out-of-distribution (OOD) samples, especially in the presence of spurious correlations. \citet{arjovsky2019invariant, nagarajan2020understanding, tsipras2018robustness} suggest that deep networks can be biased toward spuriously correlated attributes, or dataset bias: misleading statistical heuristics that are closely correlated with, but not causally related to, the target label. Several recent works explain this phenomenon through the lens of the simplicity bias \citep{rahaman2019spectral, neyshabur2014search, shah2020pitfalls} of gradient descent-based optimization of deep networks; deep networks prefer to rely on spurious features that are ``simpler'' to learn, e.g., more linear.
{These catastrophic pitfalls of dataset bias have facilitated the development of debiasing methods, which can be roughly categorized into approaches (1) leveraging annotations of spurious attributes, i.e., bias labels \citep{kim2019learning, sagawa2019distributionally, wang2020towards, tartaglione2021end}, (2) presuming a specific type of bias, e.g., color and texture \citep{bahng2020learning, wang2019learning, ge2021robust}, or (3) using no explicit supervision on dataset bias \citep{liu2021just, nam2020learning, lee2021learning, levy2020large, zhang2022correct}.}
{While substantial technical advances have been made in this regard, these approaches still fail to address an open problem: how to train a debiased classifier by fully exploiting unlabeled samples lacking \textit{both} bias and target labels.}
More specifically, while large-scale unlabeled datasets can be biased towards spuriously correlated sensitive attributes, e.g., ethnicity, gender, or age \citep{abid2021large, agarwal2021evaluating}, most existing debiasing frameworks are not designed to handle this unsupervised setting.
Moreover, recent works on self-supervised learning have reported that it may still suffer from poor OOD generalization \citep{geirhos2020surprising, chen2021intriguing, robinson2021can, tsai2021conditional} when such dataset bias remains after applying data augmentations.
To address this problem, we first make a series of observations about the dynamics of representation complexity by controlling the degree of spurious correlations in synthetic simulations.
Interestingly, we found that spurious correlations suppress the effective rank \citep{roy2007effective} of latent representations, which severely deteriorates the semantic diversity of representations and leads to the degradation of feature discriminability.
Another notable aspect of our findings is that an intentional increase of feature redundancy amplifies ``prejudice'' in neural networks. To be specific, as we enforce correlation among latent features to regularize the effective rank of representations (i.e., rank regularization), the accuracy on bias-conflicting samples quickly declines while the model still performs reasonably well on the bias-aligned\footnote{The \textit{bias-aligned} samples refer to data with a strong correlation between (potentially latent) spurious features and target labels. The \textit{bias-conflicting} samples refer to the opposite cases where spurious correlations do not exist.} samples.
Inspired by these observations, we propose a self-supervised debiasing framework that can fully utilize potentially biased unlabeled samples. We pretrain (1) a biased encoder with rank regularization, which serves as a semantic bottleneck limiting the semantic diversity of feature components, and (2) the main encoder with standard self-supervised learning approaches. {Specifically, the biased encoder affords us the leverage to uncover spurious correlations and identify bias-conflicting training samples in a downstream task.}
Various experiments on real-world biased datasets demonstrate that retraining the last-layer linear classifier while upweighting the identified bias-conflicting samples significantly improves OOD generalization in the linear evaluation protocol \citep{oord2018representation}, even without making any modifications to the pretrained encoder. Our approach improves the accuracy on the bias-conflicting evaluation set from \(36.4\%\) to \(59.5\%\) on UTKFace \citep{zhang2017age} with age bias and from \(48.6\%\) to \(58.4\%\) on CelebA \citep{liu2015deep} with gender bias, compared to the best self-supervised baseline. {Moreover, we found that the proposed framework outperforms state-of-the-art supervised debiasing methods in a semi-supervised learning problem on CelebA.}
\section{Low-rank bias of biased representations}
\label{sec: low-rank}
\subsection{Preliminaries}
To evaluate the semantic diversity of a given representation matrix, we introduce the \textit{effective rank} \citep{roy2007effective}, a widely used metric to measure the effective dimensionality of a matrix and analyze the spectral properties of features in neural networks \citep{arora2019implicit, razin2020implicit, huh2021low, baratin2021implicit}:
\begin{definition}
\label{2 def: effective rank}
Given the matrix \(X \in \mathbb{R}^{m \times n}\) and its singular values \(\{\sigma_i\}_{i=1}^{\min{(m, n)}}\), the effective rank \(\rho\) of \(X\) is defined as the Shannon entropy of the normalized singular values:
\begin{equation}
\label{2 eq: effective rank}
\rho(X) = - \sum_{i=1}^{\min{(m, n)}} \bar{\sigma_i} \log \bar{\sigma_i},
\end{equation}
where \(\bar{\sigma_i} = \sigma_i / \sum_k \sigma_k\) is the \(i\)-th normalized singular value. Without loss of generality, we omit the exponentiation of \(\rho(X)\), as done in \citet{roy2007effective}.
\end{definition}
Effective rank is also referred to as spectral entropy; its value is maximized when the singular values are all equal and minimized when a single top singular value dominates all others. Recent works \citep{chen2019transferability, chen2019catastrophic} have revealed that the discriminability of representations resides in a wide range of eigenvectors, since the rich discriminative information for the classification task cannot be transmitted by only a few eigenvectors with top singular values. Thus, from a spectral analysis perspective, effective rank quantifies how diverse the semantic information encoded by each eigenfeature is, which is closely related to the feature discriminability across target label categories. In the rest of the paper, we use effective rank and rank interchangeably, following prior works.
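For concreteness, the following is a minimal NumPy sketch (our illustration, not the authors' code) of how the effective rank in Definition \ref{2 def: effective rank} can be computed from a representation matrix; the function name and the numerical tolerance are our own choices.
\begin{lstlisting}[language=Python]
# Minimal sketch: effective rank (spectral entropy) of a representation matrix.
import numpy as np

def effective_rank(X, eps=1e-12):
    sigma = np.linalg.svd(X, compute_uv=False)   # singular values of X
    p = sigma / (sigma.sum() + eps)              # normalized singular values
    p = p[p > eps]                               # drop zeros before taking the log
    return float(-(p * np.log(p)).sum())         # Shannon entropy of the spectrum

# A (nearly) rank-1 matrix has effective rank close to 0, while a matrix with a
# flat spectrum approaches log(min(m, n)).
Z = np.random.randn(512, 256)
print(effective_rank(Z))
\end{lstlisting}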
\subsection{Spectral analysis of the bias-rank relationships}
\textbf{Degree of spurious correlations.} We now present experiments showing that deep networks tend to encode lower-rank representations in the presence of stronger spurious correlations. To arbitrarily control the degree of spurious correlations, we introduce synthetic biased datasets, Color-MNIST (CMNIST) and Corrupted CIFAR-10 (CIFAR-10C) \citep{hendrycks2019benchmarking}, with color and corruption bias types, respectively. We define the degree of spurious correlations as the ratio of bias-aligned samples in the training set, or bias ratio; most of the samples are bias-aligned under strong spurious correlations. Figure \ref{2 fig: cmnist rank} shows that the rank of latent representations from the penultimate layer of the classifier decreases as the bias ratio increases on CMNIST. We provide similar rank reduction results for CIFAR-10C in the supplementary material.
\begin{figure}[tbp]
\centering
\begin{subfigure}[c]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/unbias_corr.png}
\subcaption[c]{Unbiased correlation}
\label{2 fig: unbias corr}
\end{subfigure}
\begin{subfigure}[c]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/bias_corr.png}
\subcaption[c]{Biased correlation}
\label{2 fig: bias corr}
\end{subfigure}
\begin{subfigure}[c]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/rank_cmnist.png}
\subcaption[c]{Color bias}
\label{2 fig: cmnist rank}
\end{subfigure}
\begin{subfigure}[c]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/rank_cmnist_reverse_y.png}
\subcaption[c]{Digit bias}
\label{2 fig: cmnist rank reverse}
\end{subfigure}
\begin{subfigure}[c]{0.58\textwidth}
\includegraphics[width=\textwidth]{figures/acc_major_cmnist.png}
\subcaption[c]{Subsampling results}
\label{2 fig: subsampling}
\end{subfigure}
\begin{subfigure}[c]{0.4\textwidth}
\includegraphics[width=\textwidth]{figures/lambda_normalized_cmnist_cifar.png}
\subcaption[c]{Spectral analysis}
\label{2 fig: lambda}
\end{subfigure}
\caption{Empirical analysis of the rank reduction phenomenon. (\textbf{a}, \textbf{b}): Hierarchically clustered auto-correlation matrices of unbiased and biased representations (bias ratio = 99\(\%\)). (\textbf{c}, \textbf{d}): Effective rank when treating color or digit as the dataset bias, respectively. `Unbiased' denotes a model trained on a perfectly unbiased dataset, i.e., a random color is assigned to each training sample. (\textbf{e}): Unbiased test accuracy (left) and effective rank (right) measured while subsampling bias-aligned samples. The subsampling ratio denotes the ratio of removed samples among all bias-aligned samples. (\textbf{f}): SVD analysis with max-normalized singular values. The top 100 values are shown (total: 256).}
\end{figure}
We further compare the correlation matrices of biased and unbiased latent representations in the penultimate layer of biased and unbiased classifiers, respectively. In Figures \ref{2 fig: unbias corr} and \ref{2 fig: bias corr}, we observe that the block structure in the correlation matrix is more evident for the biased representations after hierarchical clustering, indicating that the features become highly correlated, which may limit the maximum information capacity of the network.
We also measure the effective rank while varying the subsampling ratio \citep{japkowicz2002class} of bias-aligned samples. Subsampling controls the trade-off between the dataset size and the ratio of bias-conflicting samples to bias-aligned samples, i.e., the conflict-to-align ratio: subsampling bias-aligned samples reduces the dataset size but increases the conflict-to-align ratio. Figure \ref{2 fig: subsampling} shows that the effective rank aligns well with the conflict-to-align ratio and the generalization performance, but not with the dataset size.
\textbf{Simplicity bias.} Here, we argue that the rank reduction is rooted in the simplicity bias \citep{shah2020pitfalls, hermann2020shapes} of deep networks. Specifically, we reverse the task so that color is now treated as the target variable and the digit is spuriously correlated with the color, as done in \citet{nam2020learning}. Digits are randomly assigned to colors in an unbiased evaluation set. Figure \ref{2 fig: cmnist rank reverse} shows that the rank reduction is not reproduced in this reversed setting, where the baseline levels of effective rank are inherently low. This intuitively implies that the rank reduction is evidence of reliance on easier-to-learn features: the rank does not decrease further if the representation is already sufficiently simple.
\textbf{Spectral analysis.} To investigate the rank reduction phenomenon in depth, we compare the normalized singular values of biased and unbiased representations. Specifically, we conduct singular value decomposition (SVD) on the feature matrices of both biased and unbiased classifiers and plot the singular values normalized by the spectral norm of the corresponding matrix. Figure \ref{2 fig: lambda} shows that the top few normalized singular values of biased representations are similar to or even greater than those of unbiased representations. However, the remaining majority of singular values decay significantly faster for biased representations, greatly weakening the informative signal of eigenvectors with smaller singular values and deteriorating feature discriminability \citep{chen2019transferability, chen2019catastrophic}.
\subsection{Rank regularization}
Motivated by the rank reduction phenomenon, we ask the opposite question: ``Can we intentionally amplify the prejudice of deep networks by \textit{maximizing} the redundancy between the components of latent representations?''. If the feature components are extremely correlated, the corresponding representations exhibit most of their spectral energy along the direction of a single singular vector. In this case, the effective rank approaches 0. In other words, our goal is to design a \textit{semantic bottleneck} of representations that restricts the semantic diversity of feature vectors. To implement the bottleneck in practice, we compute the auto-correlation matrix of the output of the encoder. Throughout the paper, we denote by \(x \in \mathbb{R}^m\) and \(y \in \mathcal{Y}\) an \(m\)-dimensional input sample and its corresponding target label, respectively. We denote by \(X=\{x_k\}_{k=1}^n\) a batch of \(n\) samples from a dataset, which is fed to an encoder \(f_\theta: \mathbb{R}^{m} \rightarrow \mathbb{R}^{d}\) parameterized by \(\theta\). We then construct a matrix \(Z \in \mathbb{R}^{n \times d}\) whose \(i\)-th row is the output representation of the encoder, \(f_\theta(x_i)^T\), for \(x_i \in X\). Let \(\bar{Z}\) denote the mean-centered \(Z\) along the batch dimension. The normalized auto-correlation matrix \(C \in \mathbb{R}^{d \times d}\) of \(\bar{Z}\) is defined as follows:
\begin{equation}
\label{2.2 eq: auto correlation}
C_{i, j} = \frac{\sum_{b=1}^{n} \bar{Z}_{b, i} \bar{Z}_{b, j}}{\sqrt{\sum_{b=1}^n \bar{Z}_{b, i}^2} {\sqrt{\sum_{b=1}^n \bar{Z}_{b, j}^2}}} \quad \forall 1 \leq i, j \leq d,
\end{equation}
where \(b\) indexes samples and \(i, j\) index feature dimensions. We then define our regularization term as the negative sum of squared off-diagonal entries of \(C\):
\begin{equation}
\label{2.2 eq: rank reg}
\ell_{reg}(X; \theta) = - \sum_{i} \sum_{j\neq i} C_{i,j}^2,
\end{equation}
which we refer to as the rank loss. Note that the target labels of \(X\) are not used at all in this formulation.
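To make the regularizer concrete, the following PyTorch sketch (our illustration, not the authors' released code) computes the normalized auto-correlation matrix in (\ref{2.2 eq: auto correlation}) and the rank loss in (\ref{2.2 eq: rank reg}); the small constant in the denominator is an assumption for numerical stability.
\begin{lstlisting}[language=Python]
# Sketch of the rank loss: negative sum of squared off-diagonal correlations.
import torch

def rank_loss(z, eps=1e-8):
    # z: (n, d) batch of encoder outputs f_theta(x_i)
    z = z - z.mean(dim=0, keepdim=True)              # mean-center along the batch
    norm = z.pow(2).sum(dim=0, keepdim=True).sqrt()  # per-dimension norms, (1, d)
    c = (z.t() @ z) / (norm.t() @ norm + eps)        # normalized auto-correlation, (d, d)
    off_diag = c - torch.diag(torch.diag(c))         # zero out the diagonal entries
    return -off_diag.pow(2).sum()                    # maximize feature redundancy
\end{lstlisting}
Minimizing this loss, i.e., maximizing the squared off-diagonal correlations, drives the representation towards a low effective rank.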
\begin{figure}[tp]
\centering
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/cmnist_groupacc.png}
\subcaption[c]{CMNIST}
\label{2 fig: cmnist group}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/cifar_groupacc.png}
\subcaption[c]{CIFAR-10C}
\label{2 fig: cifar group}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/waterbird_groupacc.png}
\subcaption[c]{Waterbirds}
\label{2 fig: waterbirds group}
\end{subfigure}
\caption{(\textbf{a}, \textbf{b}): Bias-conflict and Bias-aligned accuracy on CMNIST and CIFAR-10C (bias ratio=1$\%$). (\textbf{c}): Group accuracy on Waterbirds.}\label{2 fig: rank reg}
\end{figure}
\begin{table}[htbp]
\centering
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{c c c}
\toprule
{} & Precision ($\%$) & Recall ($\%$) \\
\midrule
ERM & 85.59 & 19.76\\
\midrule
+ Rank reg & \textbf{98.83} & \textbf{95.91} \\
\bottomrule
\end{tabular}
\caption{CMNIST}
\label{2 table: cmnist precision}
\end{subtable}
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{c c c}
\toprule
{} & Precision ($\%$) & Recall ($\%$) \\
\midrule
ERM & 52.03 & 0.06 \\
\midrule
+ Rank reg & \textbf{71.39} & \textbf{51.43} \\
\bottomrule
\end{tabular}
\caption{CIFAR-10C}
\label{2 table: cifar precision}
\end{subtable}
\caption{Precision and recall of identified bias-conflicting samples in the error set of an ERM model trained with and without rank regularization. Bias ratio = 1$\%$ for both datasets. $\lambda_{reg}=35$ and $\lambda_{reg}=20$ are used for CMNIST and CIFAR-10C, respectively. Capacity control techniques (e.g., strong $\ell_2$ regularization and early stopping \citep{liu2021just, sagawa2019distributionally}) are not used, to emphasize the contribution of rank regularization.}
\label{2 table: precision}
\end{table}
To investigate the impact of rank regularization, we construct a classification model by placing a linear classifier \(f_W: \mathbb{R}^d \rightarrow \mathbb{R}^c\), parameterized by \(W \in \mathcal{W}\), on top of the encoder \(f_\theta\), where \(c = |\mathcal{Y}|\) is the number of classes. We then train models with the cross-entropy loss \(\ell_{CE}\) combined with \(\lambda_{reg} \ell_{reg}\), where \(\lambda_{reg} > 0\) is a Lagrangian multiplier. We use the CMNIST, CIFAR-10C, and Waterbirds \citep{wah2011caltech} datasets, and evaluate the trained models on an unbiased test set following \citet{nam2020learning, lee2021learning}. After training models with varying values of the hyperparameter \(\lambda_{reg}\), we compare bias-aligned and bias-conflict accuracy, i.e., the average accuracy on bias-aligned and bias-conflicting samples in the unbiased test set, respectively, for CMNIST and CIFAR-10C. For Waterbirds, we report the test accuracy on every individual data group.
Figure \ref{2 fig: rank reg} shows that models suffer more from poor OOD generalization as they are trained with larger \(\lambda_{reg}\). The average accuracy on bias-conflicting groups is significantly degraded, while the accuracy on bias-aligned groups is maintained to some extent. This implies that rank regularization may force deep networks to focus on spurious attributes. Table \ref{2 table: precision} shows that biased models with strong regularization can effectively identify the bias-conflicting samples in the training set. Specifically, we train a biased classifier with rank regularization and extract an error set \(E\) of misclassified training samples as proxies for bias-conflicting samples. As reported in Table \ref{2 table: precision}, we empirically observe that our biased classifier is relatively robust to the unintended memorization of bias-conflicting samples \citep{sagawa2020investigation}, in contrast to standard models trained by Empirical Risk Minimization (ERM).
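The following sketch (our illustration, reusing \texttt{rank\_loss} from the previous snippet; \texttt{encoder}, \texttt{classifier}, and \texttt{loader} are placeholder names) shows how the combined objective \(\ell_{CE} + \lambda_{reg}\,\ell_{reg}\) can be optimized and how the error set \(E\) is then collected.
\begin{lstlisting}[language=Python]
# Sketch: train a biased classifier with the rank regularizer, then collect the
# error set E of misclassified training samples (proxies for bias-conflicting samples).
import torch
import torch.nn.functional as F

def train_step(encoder, classifier, optimizer, x, y, lambda_reg):
    z = encoder(x)                                   # latent representations
    logits = classifier(z)
    loss = F.cross_entropy(logits, y) + lambda_reg * rank_loss(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def collect_error_set(encoder, classifier, loader):
    error_set = []
    for idx, x, y in loader:                         # loader is assumed to yield sample indices
        preds = classifier(encoder(x)).argmax(dim=1)
        error_set.extend(idx[preds != y].tolist())   # misclassified samples form E
    return error_set
\end{lstlisting}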
\section{DeFund: Debiasing framework with unlabeled data}
Motivated by the observations in Section \ref{sec: low-rank}, we propose a self-supervised debiasing framework with unlabeled data, coined DeFund. The most important difference from prior works is that the proposed framework can intentionally learn biased representations without human supervision. Recent methods \citep{bahng2020learning, nam2020learning, liu2021just, lee2021learning, zhang2022correct} train a biased model to uncover spurious correlations and guide the main model to focus on samples that the biased model struggles to predict, which are presumably the ones conflicting with the bias. While these methods require a bias label or a target label to train biased representations, we obtain such biased representations for free using self-supervised learning and rank regularization.
The proposed framework is composed of two stages. We first train the biased encoder, which can later be used to detect bias-conflicting samples in a downstream task, along with the main encoder trained by self-supervised learning, both without any labels. After pretraining, we identify the bias-conflicting samples in the downstream task using the linear evaluation protocol \citep{oord2018representation, chen2020simple}. This set of samples is then used to debias the main model.
We denote by \(f^{bias}_{\theta}: \mathcal{X} \rightarrow \mathbb{R}^d\) and \(f^{main}_{\phi}: \mathcal{X} \rightarrow \mathbb{R}^d\) the biased encoder and the main encoder, parameterized by \(\theta \in \Theta\) and \(\phi \in \Theta\), respectively, where \(d\) is the dimensionality of latent representations. We can then compute the rank loss in (\ref{2.2 eq: rank reg}) with the introduced encoders and a given batch \(\{x_k\}_{k=1}^N\) of size \(N\). Let \(f^{cls}_{W_b}: \mathbb{R}^{d} \rightarrow \mathbb{R}^C\) be a single-layer classifier parameterized by \(W_b \in \mathcal{W}\), placed on top of the biased encoder \(f^{bias}_\theta\), where \(C = |\mathcal{Y}|\) is the number of classes. We similarly define the linear classifier \(f^{cls}_{W_m}\) for the main encoder. We then refer to \(f^{bias}: \mathcal{X} \rightarrow \mathbb{R}^C\) as the biased model, where \(f^{bias}(x) = f^{cls}_{W_b}\big( f^{bias}_{\theta}(x) \big), \forall x \in \mathcal{X}\). We similarly define the main model \(f^{main}\) as \(f^{main}(x) = f^{cls}_{W_m}\big( f^{main}_{\phi}(x) \big), \forall x \in \mathcal{X}\). While projection networks \citep{chen2020simple, khosla2020supervised} are employed as well, we omit their notation because they are not involved in the linear evaluation after pretraining the encoders.
\textbf{Stage 1. Train biased encoder.} To train the biased encoder \(f^{bias}_\theta\), we revisit the rank regularization term in (\ref{2.2 eq: rank reg}), which can control the effective dimensionality of representations for the instance discrimination task. We conjecture that the scope of captured features may be restricted to the easy-to-learn ones if the maximum information capacity of the encoder is strongly suppressed. Based on this intuition, we apply rank regularization directly to the output of the base encoder, which encourages the feature components to be highly correlated. A simple simulation on a synthetic dataset conceptually validates this intuition: by measuring a bias metric that quantifies how much the encoder focuses on shortcut attributes, we find that the representation becomes more biased as it is trained with stronger regularization (details are provided in the supplementary material). Moreover, while the overall performance may be upper-bounded due to the constraint on effective dimensionality \citep{jing2021understanding}, we observe that the bias-conflict accuracy is primarily sacrificed compared to the bias-aligned accuracy (related experiments are in Section \ref{sec: results}).
\textbf{Stage 2. Debiasing downstream tasks.} After training the biased encoder, our next goal is to debias the main model, pretrained on the same dataset with standard self-supervised learning approaches, e.g., \citet{chen2020simple, chen2021exploring}. To achieve this, we recall recent work that explains contrastive learning as inverting the data-generating process: \citet{zimmermann2021contrastive} demonstrate that an encoder pretrained with a contrastive loss from the InfoNCE family can recover the true latent factors of variation under certain statistical assumptions. Now imagine an ideal pretrained encoder whose every output component corresponds to a latent factor of data variation. One may expect this encoder to fit downstream classification tasks perfectly, where the only remaining job is to find the optimal weights of these factors for prediction. However, if most samples in the downstream task are bias-aligned, these samples may misguide the model to upweight the spuriously correlated latent factors. In other words, the model may reach a biased solution even though it encodes well-generalized representations.
The above contradiction elucidates the importance of bias-conflicting samples, which serve as counterexamples to spuriously correlated feature components, thereby preventing such components from being involved in the prediction. Based on this intuition, we introduce a novel debiasing protocol that probes and upweights bias-conflicting samples to find and fully exploit feature components independent of spurious correlations. We evaluate our framework in two scenarios: linear evaluation and semi-supervised learning. First, following the conventional protocol of self-supervised learning, we conduct linear evaluation \citep{zhang2016colorful, oord2018representation}, which trains a linear classifier on top of unsupervised pretrained representations using every labeled training sample. After training a linear classifier \(f^{cls}_{W_b}\) on the pretrained biased encoder \(f^{bias}_{\theta}\) with the whole training set \(D=\{(x_k, y_k)\}_{k=1}^{N}\) of size \(N\), the error set \(E\) of misclassified samples and their corresponding labels is regarded as a set of bias-conflicting pairs. We then train a linear classifier \(f^{cls}_{W_m}\) on the frozen representations of the main encoder \(f^{main}_{\phi}\) by upweighting the identified samples in \(E\) with \(\lambda_{up} > 0\). The loss function for \textit{debiased} linear evaluation is defined as follows:
\begin{equation}
\label{3 eq: upweighted loss}
\ell_{debias}(D; W_m) = \lambda_{up} \sum_{(x, y) \in E} \ell (x, y; W_m) + \sum_{(x, y) \in D \setminus E} \ell (x, y; W_m),
\end{equation}
where we use the cross-entropy loss for \(\ell: \mathcal{X} \times \mathcal{Y} \times \mathcal{W} \rightarrow \mathbb{R}^{+}\). Note that the target labels are used only for training the linear classifiers after pretraining.
While linear evaluation is the standard protocol for evaluating self-supervised learning methods, we also compare our method directly to supervised debiasing methods in a semi-supervised learning setting. Here, we assume that the training dataset includes only a small amount of labeled samples combined with a large amount of unlabeled samples. As in linear evaluation, we train a linear classifier on top of the biased encoder using the labeled samples. After obtaining the error set \(E\) of misclassified samples, we fine-tune the whole main model by upweighting the identified samples in \(E\) with \(\lambda_{up}\). Note that supervised baselines are restricted to using only the small fraction of labeled samples, while the proposed approach benefits from the abundant unlabeled samples when training the biased encoder. The pseudo-code of DeFund is provided in the supplementary material.
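A minimal sketch of the objective in (\ref{3 eq: upweighted loss}), assuming a precomputed boolean mask marking membership in the error set \(E\) (the mask name and the reduction are our own choices):
\begin{lstlisting}[language=Python]
# Sketch: upweighted cross-entropy for debiased linear evaluation.
import torch
import torch.nn.functional as F

def debiased_loss(logits, y, in_error_set, lambda_up):
    # logits: (n, C), y: (n,), in_error_set: (n,) boolean mask for samples in E
    per_sample = F.cross_entropy(logits, y, reduction="none")
    weights = torch.where(in_error_set,
                          torch.full_like(per_sample, lambda_up),
                          torch.ones_like(per_sample))
    return (weights * per_sample).sum()              # upweight the samples in E
\end{lstlisting}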
\section{Results}
\label{sec: results}
\subsection{Methods}
\textbf{Dataset.} To investigate the effectiveness of the proposed debiasing framework, we evaluate several supervised and self-supervised baselines on UTKFace \citep{zhang2017age} and CelebA \citep{liu2015deep}, on which prior work has observed poor generalization performance due to spurious correlations. Each dataset includes several sensitive attributes, e.g., gender, age, and ethnicity. We consider three prediction tasks: for UTKFace, we conduct binary classification using (\texttt{Gender}, \texttt{Age}) and (\texttt{Race}, \texttt{Gender}) as (target, spurious) attribute pairs, which we refer to as UTKFace (age) and UTKFace (gender), respectively. For CelebA, we consider (\texttt{HeavyMakeup}, \texttt{Male}) as the (target, spurious) attribute pair. Following \citet{nam2020learning, hong2021unbiased}, we report bias-conflict accuracy together with unbiased accuracy, evaluated on an explicitly constructed validation set. We exclude the datasets in Figure \ref{2 fig: rank reg} based on the observation that an encoder trained with SimCLR already encodes invariant representations w.r.t.\ simple spurious attributes, e.g., color bias.
\textbf{Baselines.} We mainly compare against recent advanced self-supervised learning methods, SimCLR \citep{chen2020simple}, VICReg \citep{bardes2021vicreg}, and SimSiam \citep{chen2021exploring}, which can be categorized into contrastive (SimCLR) and non-contrastive (VICReg, SimSiam) methods. We further report the performance of vanilla networks trained by ERM, and of other supervised debiasing methods such as LNL \citep{kim2019learning}, EnD \citep{tartaglione2021end}, and the upweighting-based algorithms JTT \citep{liu2021just} and CVaR DRO \citep{levy2020large}, which can be categorized into methods that leverage annotations of dataset bias (LNL, EnD) or not (JTT, CVaR DRO).
\textbf{Optimization setting.} Both the biased and main encoders are pretrained with SimCLR \citep{chen2020simple} for 100 epochs on UTKFace and 20 epochs on CelebA, using ResNet-18, the Adam optimizer, and cosine annealing learning-rate scheduling \citep{loshchilov2016sgdr}. We use an MLP with one hidden layer for the projection networks, as in SimCLR. All other baseline results are reproduced by tuning the hyperparameters and optimization settings with the same backbone architecture. We report the results of the model with the highest bias-conflict test accuracy among those with improved unbiased test accuracy compared to the corresponding baseline algorithm, i.e., SimCLR for ours. The same criterion is applied to supervised baselines, although JTT often sacrifices unbiased accuracy for highly improved bias-conflict accuracy. More details about the datasets and simulation settings are provided in the supplementary material.
\subsection{Evaluation results}
\begin{table}[tbp]
\caption{(Linear evaluation) Bias-conflict and unbiased test accuracy ($\%$) evaluated on UTKFace and CelebA. Models requiring information on the target class or dataset bias in the (pre)training stage are marked with \cmark in columns Y and B, respectively.}
\label{4. table: linear evaluation}
\centering
\begin{tabular}{c c c c c c c c c}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Model}} & \multicolumn{1}{c}{\multirow{2}{*}{Y}} & \multicolumn{1}{c}{\multirow{2}{*}{B}} & \multicolumn{2}{c}{UTKFace (age)} & \multicolumn{2}{c}{UTKFace (gender)} & \multicolumn{2}{c}{CelebA} \\
\cmidrule{4-9}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & Conflict & Unbiased & Conflict & Unbiased & Conflict & Unbiased \\
\midrule
{LNL} & {\cmark} & {\cmark} & {45.8} & {72.6} & {73.1} & {84.9} & {55.9} & {76.0} \\
{EnD} & {\cmark} & {\cmark} & {45.3} & {72.2} & {75.5} & {85.5} & {57.3} & {76.4} \\
\midrule
{JTT} & {\cmark} & {\xmark} & {63.8} & {69.4} & {71.2} & {77.6} & {62.4} & {74.7} \\
{CVaR DRO} & {\cmark} & {\xmark} & {45.7} & {71.4} & {68.6} & {81.0} & {58.0} & {76.5} \\
\midrule
{ERM} & {\cmark} & {\xmark} & {45.4} & {71.0} & {65.7} & {79.5} & {54.2} & {74.1} \\
\rowcolor{Gray}
{SimSiam} & {\xmark} & {\xmark} & {28.2} & {62.6} & {48.5} & {69.8} & {39.9} & {66.7} \\
\rowcolor{Gray}
{VICReg} & {\xmark} & {\xmark} & {32.3} & {64.6} & {51.0} & {71.3} & {48.6} & {71.9} \\
\rowcolor{Gray}
{SimCLR} & {\xmark} & {\xmark} & {36.4} & {66.3} & {56.3} & {74.2} & {46.9} & {69.8} \\
\rowcolor{Gray}
{\textbf{DeFund}} & {\xmark} & {\xmark} & {\textbf{59.5}} & {\textbf{70.6}} & {\textbf{63.7}} & {\textbf{74.9}} & {\textbf{58.4}} & {\textbf{73.1}} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{(Semi-supervised learning) Bias-conflict and unbiased accuracy evaluated on CelebA. The label fraction is set to $10\%$. The first and second markers indicate whether the model requires information on the target class or on dataset bias in the pretraining stage, respectively.}
\label{4. table: semi-supervised}
\centering
\begin{tabular}{c c c c c c c c}
\toprule
\multirow{2}{*}{Accuracy} & {LNL} & {EnD} & {JTT} & {CVaR DRO} & {ERM} & {SimCLR} & {\textbf{DeFund}} \\
{} & {\cmark \cmark} & {\cmark \cmark} & {\cmark \xmark} & {\cmark \xmark} & {\cmark \xmark} & {\xmark \xmark} & {\xmark \xmark} \\
\midrule
{Bias-conflict ($\%$)} & {55.7} & {55.3} & {51.5} & {55.6} & {51.5} & {50.5} & {\textbf{60.5}} \\
\midrule
{Unbiased ($\%$)} & {75.6} & {\textbf{76.2}} & {71.4} & {75.7} & {73.1} & {71.6} & {75.6} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Linear evaluation.} The bias-conflict and unbiased test accuracies are summarized in Table \ref{4. table: linear evaluation}. DeFund outperforms every self-supervised baseline, including SimCLR, SimSiam, and VICReg, by a large margin with respect to both bias-conflict and unbiased accuracy. Moreover, in some cases, DeFund even outperforms ERM models or supervised debiasing approaches in terms of bias-conflict accuracy. Note that there is an inherent gap between ERM models and self-supervised baselines, roughly \(8.7\%\) on average. We also find that non-contrastive learning methods generally perform worse than the contrastive learning method. This warns against training the main model with a non-contrastive learning approach, while it may remain a viable option for the biased model. We provide results of the proposed framework implemented with non-contrastive learning methods in the supplementary material.
\textbf{Semi-supervised learning.} To compare the performance of supervised and self-supervised methods in a fairer scenario, we sample \(10\%\) of the labeled CelebA training dataset at random for each run. The remaining \(90\%\) of the samples are treated as unlabeled and are used only for pretraining the encoders of the self-supervised baselines. The labeled samples are provided equally to both supervised and self-supervised methods.
Remarkably, Table \ref{4. table: semi-supervised} shows that the proposed framework outperforms all the other state-of-the-art supervised debiasing methods. Notably, only about \(16\) samples remain within the (\texttt{Gender=1, HeavyMakeup=1}) group after subsampling. Thus, it is almost impossible to prevent deep networks from memorizing those samples, even with strong regularization, if we train the networks from scratch, which explains the failure of existing upweighting protocols such as JTT. In contrast, the proposed framework can fully take advantage of unlabeled samples, where contrastive learning helps prevent memorization of the minority counterexamples \citep{xue2022investigating}. This highlights the importance of pretraining on unlabeled samples, which most prior debiasing works do not consider. Moreover, such an implicit bias of deep networks towards memorizing samples may seriously deteriorate the performance of existing bias-conflicting sample mining algorithms \citep{kim2021biaswap, zhao2021learning, nam2020learning} when the number of labeled samples is strictly limited. However, such failures are unlikely to occur in the proposed framework, since we only train a simple linear classifier on top of a frozen biased encoder to identify the bias-conflicting samples.
\begin{table}[htbp]
\caption{Ablation study on introduced modules. Accuracy is reported in ($\%$).}
\label{4. table: ablation}
\centering
\begin{tabular}{c c c c c c c}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{2}{c}{UTKFace (age)} & \multicolumn{2}{c}{UTKFace (gender)} & \multicolumn{2}{c}{CelebA} \\
\cmidrule{2-7}
\multicolumn{1}{c}{} & Conflict & Unbiased & Conflict & Unbiased & Conflict & Unbiased \\
\midrule
\rowcolor{Gray}
{SimCLR} & {36.4} & {66.3} & {56.3} & {74.2} & {46.9} & {69.8} \\
{+ Rank reg} & {26.6} & {61.3} & {50.9} & {70.3} & {43.9} & {68.3} \\
{+ Upweight} & {53.0} & {64.6} & {58.3} & {74.5} & {50.1} & {70.4} \\
\rowcolor{Gray}
{\textbf{DeFund}} & {\textbf{59.5}} & {\textbf{70.6}} & {\textbf{63.7}} & {\textbf{74.9}} & {\textbf{58.4}} & {\textbf{73.1}} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Precision and recall ($\%$) of bias-conflicting samples identified by SimCLR and our biased model. Both cases use linear evaluation.}
\label{4. table: precision}
\centering
\begin{tabular}{c c c c c c c}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{2}{c}{UTKFace (age)} & \multicolumn{2}{c}{UTKFace (gender)} & \multicolumn{2}{c}{CelebA} \\
\cmidrule{2-7}
\multicolumn{1}{c}{} & Precision & Recall & Precision & Recall & Precision & Recall \\
\midrule
{SimCLR} & {68.31} & {44.63} & {\textbf{33.36}} & {39.59} & {52.25} & {28.23} \\
\midrule
{\textbf{DeFund}} & {\textbf{68.67}} & {\textbf{75.94}} & {29.98} & {\textbf{50.93}} & {\textbf{55.29}} & {\textbf{32.46}} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Ablation study.} To quantify the performance improvement brought by each introduced module, we compare the linear evaluation results of (a) vanilla SimCLR, (b) SimCLR with rank regularization, (c) SimCLR with upweighting the error set \(E\) of the main model, and (d) the full DeFund model. Note that (c) does not use a biased model at all. Table \ref{4. table: ablation} shows that every module plays an important role in OOD generalization. Since the main model is already biased to some extent, bias-conflict accuracy can be improved even without a biased model, and the error set \(E\) of the biased model further boosts the generalization performance. We also quantify how well the biased model captures bias-conflicting samples by measuring the precision and recall of the identified bias-conflicting samples in \(E\). As reported in Table \ref{4. table: precision}, the biased model detects more diverse bias-conflicting samples than the baseline, at no cost or with an affordable loss in precision. While the improvement in recall on CelebA may seem relatively marginal, a large number of bias-conflicting samples is additionally identified in practice, considering that CelebA includes many more samples than UTKFace.
\section{Discussions and Conclusion}
\textbf{Contributions.} In this paper, we (\textbf{a}) first unveil the catastrophic adverse impacts of spurious correlations on the effective dimensionality of representations. Based on these findings, we (\textbf{b}) design a rank regularization that amplifies the feature redundancy by reducing the spectral entropy of latent representations. Then we (\textbf{c}) propose a debiasing framework empowered by the biased model pretrained with abundant unlabeled samples.
\textbf{Comparisons to related works.} Our observations are in line with the simplicity bias of gradient descent-based optimization: many recent studies \citep{rahaman2019spectral, shah2020pitfalls} have revealed that networks tend to exploit the simplest features at the expense of a small margin and often ignore more complex features. Similar observations have been made in the self-supervised setting under the name of feature suppression, where the encoder may heavily rely on attributes that make the instance discrimination task easier. While these existing works often focus on the innate preference of models for certain input cues \citep{hermann2020shapes, scimeca2021shortcut}, we provide a novel perspective on the practical impact of spurious correlations on deep latent representations: the reduction of effective rank.
\citet{robinson2021can} propose an approach in the opposite direction to ours for improving the generalization of self-supervised learning. It aims to overcome feature suppression and learn a wide variety of features via Implicit Feature Modification (IFM), which adversarially perturbs the feature components of the current representations used to discriminate instances, thereby encouraging the encoder to use other informative features. We observe that IFM improves the bias-conflict accuracy by about 1$\%$ on UTKFace (age) in Table \ref{4. table: ifm}, which is roughly consistent with the performance gains on standard benchmarks, e.g., STL10, reported in the original paper. However, its performance gain is relatively marginal compared to the proposed framework.
\begin{table}[htbp]
\caption{Results of Implicit Feature Modification \citep{robinson2021can} with SimCLR on UTKFace (age). We denote by $\epsilon$ the adversarial budget of feature modification, as in the original paper.}
\label{4. table: ifm}
\centering
\begin{tabular}{c c c c c}
\toprule
{Accuracy} & {SimCLR} & {$\epsilon=0.05$} & {$\epsilon=0.1$} & {$\epsilon=0.5$} \\
\midrule
{Bias-conflict ($\%$)} & {36.4} & {\textbf{37.5}} & {36.4} & {33.7} \\
\midrule
{Unbiased ($\%$)} & {66.3} & {\textbf{66.5}} & {66.2} & {64.6} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Future directions.} While this work has focused on intentionally encoding biased representations, we argue that more advances should be made concurrently in learning both biased and debiased representations, as partially discussed above \citep{robinson2021can}. The interplay between these bidirectional modules may further improve generalization. We also note that the proposed rank regularization is one possible implementation of the semantic bottleneck. While we explicitly control the feature correlations, we believe such a design can be implemented more implicitly. We provide experiments examining the potential of existing hyperparameters, e.g., the temperature in the InfoNCE loss, as bias controllers in the supplementary material. {Lastly, we believe that the semi-supervised learning scenario should be part of a standard evaluation pipeline, as many supervised baselines may fail due to the inductive bias of networks towards memorizing a few counterexamples.}
\clearpage
\section{Introduction}
\subsection{Blockchains and Smart-Contracts}
Blockchains are distributed immutable ledgers organized in peer-to-peer
networks, allowing participants to securely transfer tokens
without a central authority.
Some blockchains allow programmable transactions in the form
of computer programs. These \emph{smart-contracts} define
complex transactions between blockchain participants and maintain
a persistent state across runs. They can be viewed as a novel way
for multiple users to securely exchange assets and build value-sharing or
distribution applications without a trusted third party. Applications include
auction sales, decentralized exchanges, collective organizations, investment funds, etc.
Smart-contracts are relatively small and not resource-intensive, as they have to be executed on all nodes of the blockchain network.
However, compared to usual programming languages, they have an unconventional
execution model tightly tied to the blockchain implementation, and are thus
non-intuitive to program.
Compounded with the inability to update smart-contracts on the (immutable) blockchain, and with applications manipulating large sums of money, this leads to costly errors.
Notable vulnerability examples include: a reentrancy issue in
\emph{The DAO}~\cite{del2016dao},
a smart-contract implementing a venture capital
fund, which allowed a user to steal \$60 million; the \emph{Parity} wallet
bug~\cite{palladino2017parity}, which froze \$150 million by allowing
unauthenticated users to call restricted functions; and the
Proof-of-Weak-Hands attack, which allowed attackers to steal \$800,000
overnight~\cite{powh:1}, and \$2.3 million thereafter, by abusing an integer
overflow. There is therefore strong motivation to statically detect potential
misbehaviors, or prove their absence when possible.
\subsection{Motivating Example}
\begin{figure}[t]
\begin{lstlisting}[language=ocaml,xleftmargin=10pt]
storage : (address, mutez) map
entry deposit () {
let owned = match Map.get $storage $sender with
| None -> 0
| Some v -> v in
(Map.add $sender (owned + $amount) $storage, [])
}
entry withdraw (asked : mutez, dest : address) {
assert ($amount == 0);
// Fix to ensure proper access control:
// if dest != \$sender then failwith "unauthorized";
let owned = match Map.get $storage dest with
| None -> failwith "empty account"
| Some v -> v in
if asked > owned then
failwith "not enough tokens";
(Map.add dest (owned - asked) $storage,
[transfer $sender asked])
}\end{lstlisting}
\caption{A smart-contract with incorrect authentication} \label{fig:motiv}
\end{figure}
We focus on the verification by static analysis of
smart-contracts for the Tezos blockchain programmed in the Michelson
\cite{michelson:1, michelson:2} language.
Our goal is to analyze \emph{Dexter}~\cite{dexter:1}, an important smart-contract
implementing a decentralized exchange with alternate blockchain currencies
(\emph{bitcoins}, \emph{usdtz}). Its initial version featured a vulnerability~\cite{dexter:bug}
allowing an attacker to steal parts of the contract funds.
As this is a work in progress, we report on a preliminary analysis of a
simplified version only.
Consider the contract in \fref{fig:motiv}, inspired by Dexter
and written in an ML-style pseudo-code.
It implements a simple wallet, allowing users to deposit to or withdraw from
their personal account some amount in \micheltype{mutez}
(the currency on the Tezos blockchain).
A \emph{map} keeps track of user accounts: it maps users, identified by their blockchain \emph{address},
to the deposited amount.
The map is kept on the blockchain, in a so-called \emph{storage}, updated after
each transaction.
An execution of the smart-contract starts with the following variables:
\begin{compactitem}[$\bullet$]
\item \michelinstr{$storage} is the storage value currently on the blockchain.
\item \michelinstr{$sender} is the address of the initiator of the contract call
(a user, or another smart-contract).
\item \michelinstr{$amount} is the amount of mutez transferred to the contract.
Every call is a transfer, possibly with a 0 amount.
\end{compactitem}
A contract can define several independently callable entry points. They allow
splitting functionalities sharing the same storage.
In our case:
\begin{compactitem}[$\bullet$]
\item \emph{deposit} allows a user to deposit an amount. The
\michelinstr{$amount} sent to the contract is actually
recorded to belong to the user by updating his balance in
the map (\texttt{Map.add}, line~6).
\item \emph{withdraw} allows a user to transfer back some
amount from the contract, unless his account in the map
does not hold sufficient funds (\texttt{if asked > owned}, line~15).
The map is updated (\texttt{Map.add}, line~17) and a transfer back
to the sender is generated (\texttt{transfer}, line~18).
\end{compactitem}
Michelson features a purely functional execution model: the new value
of the storage as well as any effect (e.g., additional transfers) are
provided in the return value of the call.
This example actually contains a logic error: it allows a user to
transfer to himself \micheltype{mutez} that were owned by someone else.
Indeed, the \emph{dest} parameter used as key in the map in \texttt{withdraw}
is controlled by the user who calls the contract.
A fix is provided in a comment at line~11: it ensures that \texttt{dest} equals
the caller \michelinstr{$sender}.
In this example, we want to verify the high-level property stating that:
$$(key = \$sender) \lor (new\_value \geq old\_value)$$
whenever the map is updated on \texttt{key} from \texttt{old\_value} to
\texttt{new\_value}, i.e., a user can only add funds to another user's account
and can subtract funds only from his own account.
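As a sanity check of what this property means operationally, here is a small Python re-implementation of the withdraw entry point (an illustration of ours, not the analyzed Michelson code) that asserts the property on each map update; the unfixed version violates it.
\begin{lstlisting}[language=Python,xleftmargin=10pt]
# Conceptual model of `withdraw` from the motivating contract, with the target
# property asserted on every map update: (key == sender) or (new_value >= old_value).
def withdraw(storage, sender, amount, asked, dest, fixed=False):
    assert amount == 0
    if fixed and dest != sender:
        raise RuntimeError("unauthorized")          # the commented-out fix
    if dest not in storage:
        raise RuntimeError("empty account")
    owned = storage[dest]
    if asked > owned:
        raise RuntimeError("not enough tokens")
    new_storage = dict(storage)
    new_storage[dest] = owned - asked               # map update
    assert dest == sender or new_storage[dest] >= owned, "property violated"
    return new_storage, [("transfer", sender, asked)]

# Without the fix, bob can drain alice's account, violating the property:
try:
    withdraw({"alice": 10, "bob": 5}, sender="bob", amount=0, asked=10, dest="alice")
except AssertionError as e:
    print(e)                                        # -> property violated
\end{lstlisting}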
\subsection{Related Work}
A number of tools have been designed to help smart-contract developers
catch bugs or vulnerabilities. Most of them target the \emph{Ethereum}
platform. This includes symbolic execution tools,
like Maian~\cite{nikolic2018finding}, Manticore~\cite{mossberg2019manticore},
Oyente~\cite{luu2016making}, Zeus~\cite{kalra2018zeus},
Securify~\cite{tsankov2018securify}, Mythril~\cite{mueller2020introducing}.
Some tools rely on exhaustive state exploration, via model checking or
SMT solving, sometimes leading to a slow analysis~\cite{aryal2021comparison},
timeouts~\cite{ren2021:1}, or lack of results~\cite{aryal2021comparison}.
\cite{kalra2018zeus} relies on Abstract Interpretation for a
preliminary analysis, and uses an SMT solver to check properties
on inferred invariants.
\cite{ghaleb2020effective,durieux2020empirical,clairvoyance:1,dia:1}
report that many existing tools fail to detect some issues, i.e., report false
negatives.
\cite{ghaleb2020effective,schneidewind2020ethor,clairvoyance:1}
affirm that some tools can fail to prove properties because their analyses are
unsound.
Some tools focus on low-level properties affecting the popular Ethereum
platform, like reentrancy issues~\cite{rodler2018sereum,liu2018reguard} or
overflows~\cite{easyflow:1}.
By contrast, Michelson~\cite{michelson:1, michelson:2}, the language of the Tezos blockchain,
which is the focus of our work, has fewer opportunities for runtime errors and
a stricter execution model eliminating reentrancy issues; low-level properties
are thus of less interest.
Checking higher-level properties is feasible using proof assistants,
but requires a large effort to
prove even simple properties. Mi-cho-Coq~\cite{michocoq:1} provides a
Coq embedding of Michelson allowing the certification of smart-contract properties, but
requires manual proof development, and small changes in a contract
require new proofs. The Micse project~\cite{micse:1} allows for
automated static analysis using the Z3 SMT solver. The
Tezla~\cite{reis2020tezla} project translates Michelson instructions
into an intermediate representation suitable for dataflow analysis.
\subsection{The MOPSA Static Analyzer}
\emph{MOPSA}~\cite{mine-VSTTE19} is a modular and extensible static analyzer
based on Abstract Interpretation~\cite{cousot1977abstract}.
It features a C analyzer detecting runtime errors and invalid preconditions
when calling the C library, as well as Python type, value, and uncaught
exception analyses.
Its modular design allows sharing and reusing abstract domains across
multiple analyses. Its \emph{AST} structure can be extended
to support novel languages, keeping a high-level representation
without static translation.
MOPSA strongly relies on domain cooperation.
An analysis is defined as a combination of small domain modules, including
value abstractions and syntax iterators that can
be plugged in or out, depending on the target language and properties.
It provides a common set of domains to build value analyses
with intervals, relational domains like
octagons \cite{mine-HOSC06} or polyhedra \cite{ch:popl78}
to infer linear relations, recency abstraction \cite{recency} for
memory blocks, etc.
In addition to reductions, domains cooperate through expression rewriting.
For instance, a domain handles C arrays by dynamically rewriting array accesses
as accesses to scalar variables representing array cells.
Expression rewriting helps write small, independent, and reusable
domains that rely only on the manipulation of variables whose state
is managed by other, lower-level domains.
\subsection{Contribution}
We have developed an analysis of Michelson programs \cite{michelson:1, michelson:2}
based on Abstract Interpretation. This analysis is built on MOPSA
\cite{mine-VSTTE19}. It reuses domains provided by MOPSA and provides
novel domains to support the semantics of the Michelson language.
This includes support for Michelson-specific, ML-like types, such as pairs,
unions, sets, maps, etc., as well as iterators to handle the execution model of
contracts on the Tezos platform, including contract interactions.
Our tool can currently detect statically runtime errors
such as \emph{overflows} and \emph{shift overflows}, as well as Michelson contracts
that always terminate in a failure state.
We also demonstrate its potential to prove higher-level correctness
properties on the example contract from \fref{fig:motiv}.
By using abstract interpretation, our analysis is sound and efficient,
but can raise false alarms.
\sectionref{sec:simpleanalysis} presents a basic set of
abstract domains that are sufficient to cover the complete semantics
of Michelson instructions and achieve an initial, sound, low-precision
analysis;
\sref{sec:executionmodel} presents
the support for the Tezos transaction execution model, including
contracts calling external contracts and inferring invariants on unbounded
sequences of calls to contracts;
\sref{sec:higher} presents more involved abstractions necessary
to prove the correctness of \fref{fig:motiv};
\sref{sec:experiments} presents our experimental evaluation;
\sref{sec:conclusion} concludes.
\section{Michelson Value Analysis}\label{sec:simpleanalysis}
For convenience, \fref{fig:motiv} presented a
smart-contract example in a high-level, ML-like syntax.
Michelson \cite{michelson:1}, the language actually executed
on the Tezos blockchain, is a high-level \emph{stack-based} language
that takes inspiration from Forth~\cite{forth} or Joy~\cite{joy:1},
while including many aspects from functional languages:
strong static typing, immutable values, anonymous functions,
algebraic data-types, functional \emph{list}, \emph{set},
and \emph{map} types.
This section presents the abstract domains added to MOPSA
to handle the stack, data-types, and instructions.
\subsection{The Michelson Language}\label{sec:detailedexample}
\begin{figure}[t]
\begin{lstlisting}[xleftmargin=10pt]
storage nat;
parameter nat;
code { UNPAIR;
ADD;
NIL operation;
PAIR; }
\end{lstlisting}
\caption{Simple Michelson smart-contract}\label{fig:sample1}
\end{figure}
For the presentation, we consider here a much simpler
contract, shown in \fref{fig:sample1}, that adds its parameter
into an accumulator stored on the blockchain.
In Michelson, there are no explicit variables.
All values are stored on a stack, implicitly manipulated through
dedicated instructions such as \michelinstr{PUSH}, \michelinstr{DROP},
\michelinstr{DUP}, and using operator instructions (e.g., \michelinstr{ADD}
to perform an addition) replacing arguments at the top
of the stack with the operator result.
When the contract execution starts, the stack contains a
single element: a pair containing the value of the parameter
it has been called with and the value stored on the blockchain
for the contract.
When the execution ends, the contract should leave on
the stack a single value: a pair containing a list of
operations to perform after the contract execution (such as
calling other contracts, or performing a transfer) and
the new value to be stored on the blockchain.
Operations will be discussed in detail in \sref{sec:executionmodel}.
For now, the operation list output by a contract will be empty.
The initial value of the storage is specified when the contract
is deployed on the blockchain.
Subsequent executions of the contract update the storage
value.
As the language is statically typed, a contract declares the
type of its storage and parameter.
This corresponds to lines~1--2 in \fref{fig:sample1}.
In the example, both the storage and
parameter have type \micheltype{nat} (i.e., a non-negative integer),
but more complex data structures can be used. For instance,
\fref{fig:motiv} uses a map as storage and its
parameter is a union to model different possible entry points.
When executed, the code from \fref{fig:sample1} proceeds as follows:
\begin{compactitem}[$\bullet$]
\item \michelinstr{UNPAIR} pops the topmost (and only) element from the stack: a pair
with the storage and the parameter, which are pushed as the first and second
items on the stack;
\item \michelinstr{ADD} pops two elements, adds them, and pushes the result;
\item \michelinstr{NIL operation} builds an empty list of operations, and pushes
it on the stack;
\item \michelinstr{PAIR} pops the addition result and the empty list, and pushes a pair,
resulting in a stack with a single pair element.
\end{compactitem}
Thus, each call to the contract will simply add the integer passed
as parameter to the integer stored on the blockchain.
\subsection{Dynamic Translation into Variables}
MOPSA models the memory as a map from variables to values and supports
instructions, such as assignments and tests, involving expressions
over variables.
This is a common assumption for abstract interpreters as well as
domain libraries (such as APRON \cite{apron:1},
used in MOPSA) and especially useful for relational analyses.
One possibility to handle stack-based languages
is to translate them to variable-based environments and
expressions beforehand, in a pre-processing phase, as performed
for instance in the Sawja framework \cite{sawja} for Java bytecode
as well as Tezla \cite{reis2020tezla} for Michelson.
Instead of a static translation, we extended MOPSA's AST
with native support for Michelson instructions and relied
on the ability of domains to rewrite statements and expressions as
part of the abstract execution.
We developed a domain that introduces variables to represent stack positions
and translates Michelson instructions into assignments on-demand.
Non-scalar data-types, such as pairs, give rise to several variables per
stack position, as detailed in \sref{sec:types}.
A dynamic translation can potentially use information about the current
precondition to optimize the translated instructions \cite{mine-VSTTE19},
although this is not currently the case for Michelson.
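As a rough illustration of this on-demand translation, the following sketch
(in Python rather than MOPSA's OCaml, and with a made-up syntax for the
generated statements) rewrites the instruction sequence of \fref{fig:sample1}
into assignments over freshly introduced stack-position variables:
\begin{lstlisting}[language=Python,xleftmargin=10pt]
# Illustrative sketch only, not MOPSA's actual implementation.
class StackTranslator:
    def __init__(self, initial):
        self.stack = list(initial)  # variable names of current stack slots
        self.count = 0
        self.stmts = []             # generated variable-based statements

    def fresh(self, hint):
        self.count += 1
        return f"{hint}{self.count}"

    def run(self, instr):
        if instr == "UNPAIR":       # pair -> first component on top
            p = self.stack.pop()
            a, b = self.fresh("fst"), self.fresh("snd")
            self.stmts.append(f"{a} := car({p}); {b} := cdr({p})")
            self.stack += [b, a]
        elif instr == "ADD":
            x, y = self.stack.pop(), self.stack.pop()
            r = self.fresh("sum")
            self.stmts.append(f"{r} := {x} + {y}")
            self.stack.append(r)
        elif instr == "NIL":
            l = self.fresh("ops")
            self.stmts.append(f"{l} := []")
            self.stack.append(l)
        elif instr == "PAIR":
            x, y = self.stack.pop(), self.stack.pop()
            r = self.fresh("res")
            self.stmts.append(f"{r} := pair({x}, {y})")
            self.stack.append(r)

t = StackTranslator(["input"])      # initial stack: the (parameter, storage) pair
for i in ["UNPAIR", "ADD", "NIL", "PAIR"]:
    t.run(i)
print("\n".join(t.stmts))
\end{lstlisting}
Each Michelson instruction gives rise to one assignment over stack-position
variables, and a single variable remains on the symbolic stack at the end,
mirroring the concrete execution described above.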
\subsection{Michelson Data-Types}
\label{sec:types}
Michelson supports several integer kinds: arbitrary-precision integers
(\micheltype{int}), natural numbers (\micheltype{nat}), dates,
and unsigned 63-bit integers (\micheltype{mutez}).
Some conditions, such as overflows on \micheltype{mutez} values and
shift overflows, are checked runtime errors that halt the contract
execution.
A specific domain in MOPSA handles these types, checking all possible
runtime errors and representing their possible values in standard
numeric domains such as intervals and polyhedra.
Michelson supports simple algebraic types \emph{à la ML} through pairs
(\michelinstr{(a,b)}), option types (\michelinstr{Some a} or \michelinstr{None}), and
tagged unions with two variants (\michelinstr{Left a} or \michelinstr{Right b}).
The types of \michelinstr{a} and \michelinstr{b} are arbitrary, and algebraic
types can be nested.
MOPSA features domains to handle these types.
They create and manage additional variables for each component of
a pair, an option, or a sum, delegate the abstraction of
their value to the domain of the components' type, and translate
operations on algebraic types (such as \michelinstr{PAIR},
\michelinstr{CAR}, etc.) into operations on component variables.
Domains handling scalar values, such as numeric domains, ultimately
work on environments mixing components from different algebraic
values, making it possible to infer relations between values
that appear inside pairs or options.
This technique is similar to that of Bautista et al. \cite{nsad:2020}, but
we support recursive types and do not partition with respect to
which variant is used by each variable.
Michelson has a native support for immutable containers: lists, sets,
and maps.
We propose simple, general-purpose, and efficient, but coarse abstractions to handle
them.
Lists are abstracted using a summary variable to represent the union of all
list elements, and a numeric variable representing its size.
Like algebraic types, list elements can have arbitrary type.
List operations are translated into operations on the variables
(e.g., weak updates of summary variables, size increments) and delegated
to the domain appropriate for the type. List iteration \michelinstr{ITER} is handled, as usual in
abstract interpretation, using a fixpoint.
Sets are abstracted similarly to lists with slight adjustments as they cannot
contain duplicate elements.
Maps are abstracted using a summary variable to represent keys and
a summary variable to represent values.
A more involved, property-specific abstraction of maps will be discussed in \sref{sec:smap}.
\subsection{Addresses}\label{sec:addresses}
Michelson has a domain-specific type for addresses, representing
participants on the blockchain: either users (identified by a
public key) or smart-contracts (identified by a hash).
Some addresses play a special role during contract execution, and can
be accessed using dedicated instructions.
As detailed in \sref{sec:executionmodel}, a contract execution
can be triggered by the execution of another contract.
\michelinstr{SOURCE} represents the user at the origin of
a chain of calls, while \michelinstr{SENDER} is the immediate
caller of the contract.
These variables play an important role in access control and thus in
the security of contracts, as demonstrated by the fix proposed
at line~11 of \fref{fig:motiv} for our incorrect wallet implementation.
We use a reduced product of two domains for addresses:
a powerset of address constants -- useful to
precisely handle addresses hard-coded in a contract -- and
a domain that maintains whether the address equals \michelinstr{$sender} or not.
The latter is useful to precisely handle access control by comparison with
\michelinstr{$sender}, which is not a literal constant.
\section{Execution Model and Analysis}\label{sec:executionmodel}
The previous section presented domains sufficient to handle the
execution of arbitrary Michelson code on an input stack.
In this section, we take into account the execution in its context
on the blockchain.
A contract can be executed multiple times, making its storage evolve
over time.
Additionally, one execution can trigger additional contract executions
through the operation list it returns.
\subsection{Execution Context}
Once deployed (originated) on the blockchain, smart-contracts are available for
any user to call.
A call must provide a parameter as well as an entry point for the contract.
As different entry points execute very different code, MOPSA performs a case
analysis: for each entry point, the contract is analyzed on an initial stack
for this entry point with a corresponding abstract parameter value modeling
any possible actual value in the parameter type; the results are then joined
after execution.
The execution context also sets up special variables, such as \michelinstr{$sender},
modeling the contract caller and initialized with a symbolic value in the
address domain (\sref{sec:addresses}).
An analysis of the contract on an initial, empty storage would only model the
very first execution of the contract, which is not sound.
For instance, in \fref{fig:motiv}, an empty storage means
that the \emph{withdraw} entry point always fails.
Alternatively, starting with an abstract storage representing all possible
concrete values in its type could be imprecise.
\sectionref{sec:multcalls} will propose another solution where a sound
abstraction of the storage is inferred through fixpoint computation.
\subsection{Operation List}
In addition to the updated storage, Michelson contracts can return
a list of operations to execute after they finish.
These operations can be some calls to smart-contracts, which entails
executing these contracts with the updated storage.
These can, in turn, append new operations to the operation list.
The list is traversed in depth-first order until there are no
more operations to execute.
Note that a contract cannot call another contract in the middle of its
execution and expect a return value; moreover, the execution of the operation
list is atomic: a runtime error at any point reverts all modifications to
the storage of the contracts involved.
This unusual execution model makes reentrancy bugs, such as the one
that plagued \emph{The DAO}~\cite{del2016dao}, less likely on Tezos.
MOPSA has partial support for this model.
We do compute the operation list and iterate contract execution in a
fixpoint, using updated storage and inferred entry points and arguments,
as mandated.
This includes the cases where a contract calls itself, or another contract.
However, our coarse abstraction of lists using summary variables
(\sref{sec:types}) makes the analysis impractical when a contract calls
more than one contract.
It should be addressed in future work.
\subsection{Multiple Calls Analysis}
\label{sec:multcalls}
Analyzing a unique call to a smart-contract provides
some insights on the possible runtime errors, but it does not take
into account all possible executions over its whole lifetime.
We developed an analysis to over-approximate an infinite number of calls to a
smart-contract, from different callers and to multiple entry points.
Let \emph{Addr} be the set of all addresses, \emph{Entrypoints} the set of entry points for the
contract and $P_e$ the semantic function computing the new storage
after executing entry point $e$ of contract \emph{P}. The next storage $S_{i+1}$ of
the contract as a function of its current storage $S_i$
(assuming, for simplicity, an empty list of operations) is:
$$S_{i+1} = \mathit{call}(P_e, \mathit{addr}, S_{i}) \quad \text{for some } \mathit{addr} \in \mathit{Addr},\ e \in \mathit{Entrypoints}$$
Using the (classic) technique of iterations with widening, with \michelinstr{$sender} being
an abstract value of our address reduced product from \sref{sec:addresses}, our
analysis computes an abstraction of the fixpoint:
$$\mathit{lfp}_{S_0} \left( \lambda S: \bigsqcup_{e \in \mathit{Entrypoints}}
\mathit{call}(P_e, \$sender, S) \right)$$
which models arbitrary sequences of executions of the contract from the initial
storage $S_0$.
It thus outputs all possible runtime errors.
It also returns an invariant over storage values, which could be inspected by
the user for additional insight on the behavior of the smart-contract.
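To make the iteration concrete, the following simplified sketch (a toy Python
model with a plain interval domain, not our MOPSA implementation) computes such
a post-fixpoint for the accumulator contract of \fref{fig:sample1}, assuming a
single entry point and an initial storage equal to $0$:
\begin{lstlisting}[language=Python,xleftmargin=10pt]
# Storage abstracted by an interval; one call adds an arbitrary nat parameter.
INF = float("inf")

def join(a, b):                     # interval union
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                    # classic interval widening
    return (a[0] if a[0] <= b[0] else -INF,
            a[1] if a[1] >= b[1] else INF)

def call(storage):                  # abstract effect of one contract call
    param = (0.0, INF)              # any value of type nat
    return (storage[0] + param[0], storage[1] + param[1])

S = (0.0, 0.0)                      # initial storage S0 = 0
while True:
    nxt = widen(S, join(S, call(S)))
    if nxt == S:                    # post-fixpoint reached
        break
    S = nxt
print(S)                            # (0.0, inf)
\end{lstlisting}
The widening forces convergence after two iterations and yields the invariant
that the storage stays a non-negative \micheltype{nat}, with an unbounded upper
bound, over arbitrarily many calls.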
On the example of \fref{fig:motiv}, we discover that the \emph{deposit}
entry point will fill an initially empty map and allow some \emph{user} to
call the \emph{withdraw} entry point without entering the
\emph{failure} state.
This is not sufficient yet to prove our property of interest, that
``only owners can decrease the amount of tokens in the map.''
\section{High-Level Domains}
\label{sec:higher}
We now present additional abstract domains, bringing more precision
necessary to analyze our motivating example.
\subsection{Symbolic Expressions}
\begin{figure}[tb]
\begin{lstlisting}[xleftmargin=10pt]
// assuming a stack containing values x :: y;
DIP { DUP } // x :: y :: y
DUP; // x :: x :: y :: y
DUG 2; // x :: y :: x :: y
COMPARE; // pops 2 items, push -1, 0 or 1
EQ; // boolean test if -1, 0 or 1 equals to 0
IF // pops boolean and branches accordingly
{ } // x = y
{ } // x != y
\end{lstlisting}
\caption{Comparison in Michelson}\label{fig:compare}
\end{figure}
In Michelson, there is no direct comparison operator.
Consider the example in \fref{fig:compare} that executes different branches
when the topmost stack elements \texttt{x} and \texttt{y} are equal, and when
they are different.
The \michelinstr{COMPARE} polymorphic instruction pushes $-1$ (resp. $0$, $1$)
on the stack when one operand is smaller than (resp. equal to, greater than)
the other.
Then, an integer operation such as \michelinstr{EQ} compares the result to $0$
and pushes a boolean on the stack, which is consumed by \michelinstr{IF}.
To be precise, an analysis must track this sequence of instructions.
Using the domains presented in \sref{sec:simpleanalysis}, our analysis is only
able to infer that \emph{true} or \emph{false} is pushed on the stack and
immediately consumed, inferring no information on the
topmost stack values \texttt{x} and \texttt{y} inside the branches.
As an alternative to developing a complex relational domain, we implemented
the symbolic constant abstract domain proposed in \cite{mine-VMCAI06}.
This domain assigns to each variable a value $v$ from the set of symbolic
expressions $\mathbb{E}$,
or $\top$ to represent no information: $v \in \{ e, \top \}, e \in \mathbb{E}$.
The mapping is updated through assignments, building more complex expressions
by substitution.
This domain allows dynamically reconstructing high-level
expressions from low-level stack-based evaluation, without requiring
a static pre-processing phase as done for instance by \cite{sawja}
on Java bytecode.
In our example, just before the \michelinstr{IF} instruction, the top of the
stack contains the expression \emph{eq(compare(x, y))}, which allows
\michelinstr{IF} to apply flow-sensitive constraints on
the \emph{x} and \emph{y} values.
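A toy rendition of this substitution mechanism (our own Python sketch, not the
MOPSA domain itself, and ignoring the stack shuffling performed by
\michelinstr{DIP}, \michelinstr{DUP} and \michelinstr{DUG}) is:
\begin{lstlisting}[language=Python,xleftmargin=10pt]
TOP = "T"                                 # no information
env = {"s0": "x", "s1": "y"}              # stack slots holding x and y

def assign(var, expr):
    # substitute the known symbolic expressions into the assigned expression
    for v, e in env.items():
        if e != TOP:
            expr = expr.replace(v, e)
    env[var] = expr

assign("s2", "compare(s0, s1)")           # COMPARE result
assign("s3", "eq(s2)")                    # EQ result, consumed by IF
print(env["s3"])                          # eq(compare(x, y))
\end{lstlisting}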
\subsection{Equality Domain}
A stack-based execution model entails pushing copies of existing values
from the stack (using \michelinstr{DUP}), to be consumed
later by operators while leaving the original values intact for future use.
This can be seen, for instance, in \fref{fig:compare}.
In this context, maintaining information about variable equalities is
critical for precision: it allows any information inferred on one
copy to be propagated to other copies.
The symbolic expression domain from the last section helps to a degree, as it
allows substituting \texttt{x} with \texttt{y} after an assignment
\texttt{x := y}, which is sufficient for the case in \fref{fig:compare}.
However, this substitution mechanism is unidirectional and can fail
when the symmetry or the transitivity of equality is required.
Equalities can be tracked by numerical abstract domains, such as polyhedra,
but this is limited to numeric values, while we require tracking the
equality of values of complex types (such as maps, for the example
from \fref{fig:motiv}).
To solve this problem, we developed a simple domain able to infer
variable equalities.
It maintains a set of equivalence classes for variables that are
known to be equal. It proved to be more reliable than symbolic
expressions for the specific purpose of tracking equalities on non-numeric
variables.
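A minimal sketch of such an equality domain (illustrative Python based on a
union-find structure; an actual MOPSA domain must in addition provide the usual
lattice operations required by the analysis) is:
\begin{lstlisting}[language=Python,xleftmargin=10pt]
parent = {}                               # union-find over variable names

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]     # path halving
        x = parent[x]
    return x

def record_equality(x, y):                # e.g. after DUP or x := y
    parent[find(x)] = find(y)

def are_equal(x, y):                      # symmetric and transitive for free
    return find(x) == find(y)

record_equality("map_copy", "storage_map")
record_equality("map_copy2", "map_copy")
print(are_equal("map_copy2", "storage_map"))   # True
\end{lstlisting}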
\subsection{Symbolic Maps}\label{sec:smap}
In the example of \fref{fig:motiv}, we want to prove that
only the owner of an account stored in the storage map can reduce its amount.
This requires inferring a numerical property about the contents of a map.
However, this is not a uniform property: the property on the value depends on
whether the key associated to it equals the \michelinstr{$sender} address or not.
Hence, the simple summarization abstraction of \sref{sec:types}
is not expressive enough.
We propose a map abstraction of the form:
$\{\mathit{sender}\mapsto\mathit{amount},
\neg\mathit{sender}\mapsto\mathit{namount}\}$
that uses two variables per map:
$\mathit{amount}$ represents the value associated to the key equal
to \michelinstr{$sender} for this call;
$\mathit{namount}$ summarizes all the values associated to other keys.
Like previous abstractions, $\mathit{amount}$ and $\mathit{namount}$
are variables, the values of which are abstracted in the \micheltype{mutez} domain.
Using a relational domain, it is even possible to track relations between
different versions and copies of the map from the storage.
All map operations are translated into operations on these variables,
depending on whether the key used to access the map equals \michelinstr{$sender}
or not, which can be precisely tested using our symbolic address domain
(\sref{sec:addresses}).
In our example, when updating the value of $\mathit{namount}$,
we check that the new value is greater than or equal to the previous one.
This is always the case for \emph{deposit} (even if the \emph{key} address
was passed as a parameter), as transferred amounts are always non-negative.
As for \emph{withdraw}, updating the old value with
the value \texttt{(owned - asked)} triggers an error for the
original version without the fix.
For the fixed version, we are able to prove that the property is correct because the value of
$\mathit{namount}$ is unchanged in \emph{withdraw}.
\section{Experimental Results}
\label{sec:experiments}
\begin{table}[t]
\vspace{2mm}
\caption{Experimental evaluation}\label{table:exp}
\begin{tabular}{|r|r|r|r|r|}
\hline
Analysis & intv & poly & intv+exp & poly+exp \\
\hline
\hline
total contracts & 2931 & 2833 & 1579 & 1549 \\
\hline
mutez overflow & 2824 & 1967 & 411 & 308 \\
\hline
shift overflow & 10 & 10 & 9 & 9 \\
\hline
always fail & 32 & 33 & 32 & 33 \\
\hline
min. time & 0.076s & 0.15s & 0.17s & 0.16s \\
\hline
max. time & 71.02s & 568.25s & 581.34s & 590.86s \\
\hline
avg. time & 3.31s & 29.8s & 26.43s & 21.89s \\
\hline
\end{tabular}
\end{table}
We performed two kinds of experiments.
Firstly, we analyzed a large set of existing contracts for non-functional correctness
(e.g., absence of overflows) to assess the practicality and scalability
of our method.
Secondly, we analyzed more specifically the example from \fref{fig:motiv} for
our functional specification: only the owner of an account can decrease its amount.
We used a Xeon E5-2650 CPU with 128GB memory.
Our prototype can be found at \url{https://gitlab.com/baugr/mopsa-analyzer}
at commit tag \texttt{soap22}.
We selected the Carthagenet test network containing 2935 contracts with size
ranging from 1 to 3604 lines, and analyzed them with arbitrary storage.
The results, using different domain combinations, are presented in
\tref{table:exp}:
\emph{intv} uses the domains from \sref{sec:simpleanalysis} and the interval domain;
\emph{poly} adds the polyhedra domain;
\emph{intv+exp} and \emph{poly+exp} add the domains from \sref{sec:higher}.
The first line indicates the number of successful analyses (not all domains can support
all contracts due to the prototype nature of our implementation).
The lines \emph{mutez overflow} and \emph{shift overflow} indicate the number of runtime errors detected, whereas \emph{always fail} is the number of contracts always terminating in a failing state.
The last three lines indicate the minimal, maximal and average runtime per contract.
We expect that a large number of the \emph{mutez overflows} are actually false positives,
as the analysis assumes that arbitrary 63-bit amounts can be stored
and transferred but, in fact, the total number of \micheltype{mutez}
in circulation is far smaller.
Our prototype can check the functional correctness of our motivating
example from \fref{fig:motiv} in 0.273s.
As for other examples, it raises spurious overflows in \micheltype{mutez}
computations.
\section{Conclusion}
\label{sec:conclusion}
We have proposed a new sound and efficient static analysis based on
Abstract Interpretation for the Michelson smart-contract language.
Our prototype implemented in MOPSA is already able to analyze realistic
smart-contracts for runtime errors, and higher-level functional properties for
toy contracts using realistic authentication patterns.
Future work includes strengthening our implementation to analyze more contracts,
as well as improving our support of operation lists for inter-contract analysis.
We will also focus on analyzing functional correctness properties, closing the gap between
our simplified example and the actual Dexter implementation, and considering
other smart-contracts and properties.
This entails developing more expressive domains, e.g. extending our non-uniform
map abstraction to arbitrary value types,
and supporting more complex authentication patterns, such as
cryptographic signatures.
Finally, we plan to exploit the value analysis to perform a gas
consumption (i.e., timing) analysis.
\newpage
\bibliographystyle{ACM-Reference-Format}
|
{
"timestamp": "2022-10-12T02:09:51",
"yymm": "2210",
"arxiv_id": "2210.05217",
"language": "en",
"url": "https://arxiv.org/abs/2210.05217"
}
|
\section{Introduction}
Over the course of the last decades, different physical platforms have been developed to perform quantum informational tasks. The requirements for such platforms have been formalized on various occasions. One of the most frequently used lists of requirements was formulated by DiVincenzo and includes, among other things, the capability of robust state preparation, state manipulation, and measurement \cite{divincenzo2000physical}. While most platforms available today are limited to the generation and manipulation of qubits, it has become more and more evident over the last years that higher-dimensional systems can provide advantages in specific tasks, including quantum communication \cite{bruss2002optimal, cerf2002security, durt2004security, sheridan2010security, coles2016numerical,ferenczi2012symmetries} and quantum computing (see Ref.~\cite{wang2020qudits} and references therein). However, embedding these additional degrees of freedom in trapped ions or superconducting qubits has been challenging \cite{low2020practical, strauch2011quantum}. In contrast, time-energy degrees of freedom in photonic states can be used to encode very high-dimensional quantum states. Furthermore, using quantum pulse gates allows for robust and precise manipulation and measurement of these signals, and therefore provides a viable candidate for a platform for high-dimensional quantum tasks \cite{brecht2015photon}.
In this paper we develop theoretical tools that help to characterize the state preparation capabilities of these and similar setups. To that end, we aim to provide easily measurable Schmidt number witnesses for such setups by developing an algorithm to generate witness candidates that use only a few of the experimentally available measurement settings. We then apply the algorithm to the measurements available in the setup described in Ref.~\cite{brecht2015photon} and show that the obtained observable is indeed a proper Schmidt number witness, i.e., it certifies the Schmidt number of the generated quantum state. While proposals exist to certify Schmidt numbers with measurements in only two local bases \cite{bavaresco2018measurements}, these ideas do not apply to the setup at hand, as here only certain linear combinations of the entries of the density matrix can be measured.
The paper is organized as follows. In Section~\ref{sec:SNW} we review the notion of Schmidt numbers and Schmidt number witnesses, and in Section~\ref{sec:algorithm} we develop and explain the algorithm that generates Schmidt number witness candidates. In Section~\ref{sec:setup} we apply the algorithm to obtain a Schmidt number witness using only $\mathcal{O}(d)$ measurement settings to certify the Schmidt number of the generated states, and compare its noise robustness to that of other Schmidt number witnesses. In the Appendix, we provide an explicit construction
for an experiment using photonic temporal modes.
\section{Schmidt numbers and Schmidt number witnesses}\label{sec:SNW}
Throughout this paper, we consider bipartite quantum systems in $\mathbb{C}^d \otimes \mathbb{C}^d$. We denote the set of linear maps from $\mathbb{C}^d$ to itself by $\mathcal{M}_d$.
The Schmidt number of a bipartite quantum state is an entanglement measure that is related to the hardness of generating a quantum state using local operations and classical communication \cite{nielsen1999conditions}. For pure states, it is defined as the number of non-vanishing Schmidt coefficients in the Schmidt decomposition of the state, i.e., for every bipartite quantum state $\ket{\psi}$, written in the computational basis as
\begin{align}
\ket{\psi} = \sum_{i,j=0}^{d-1} c_{ij} \ket{i}_A\ket{j}_B,
\end{align}
one can find local orthonormal bases, $\{\ket{\underline{i}}_A\}, \{\ket{\underline{j}}_B\}$ for subsystems $A$ and $B$, respectively, such that
\begin{align}
\ket{\psi} = \sum_{i=0}^{k-1} \lambda_i \ket{\underline{i}}_A\ket{\underline{i}}_B,
\end{align}
where $\lambda_i \in \mathbb{R}, \lambda_i >0$ and $\sum_{i=0}^{k-1} \lambda_i^2 = 1$. The $k\leq d$ non-vanishing numbers $\lambda_i$ are called Schmidt coefficients of $\ket{\psi}$, and $k$ is called the Schmidt rank of $\ket{\psi}$, or $\operatorname{SR}(\ket{\psi})$ in short \cite{guhne2009entanglement}.
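Numerically, the Schmidt coefficients of a pure state are simply the singular values of its coefficient matrix $(c_{ij})$; a short illustration (our own, using NumPy) reads:
\begin{verbatim}
import numpy as np

d = 3
# coefficient matrix c_ij of |psi> = sum_ij c_ij |i>_A |j>_B
c = np.random.randn(d, d) + 1j * np.random.randn(d, d)
c /= np.linalg.norm(c)                    # normalise <psi|psi> = 1

lam = np.linalg.svd(c, compute_uv=False)  # Schmidt coefficients
schmidt_rank = int(np.sum(lam > 1e-12))
print(lam, schmidt_rank, np.isclose(np.sum(lam**2), 1.0))
\end{verbatim}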
In order to generalize this measure to mixed states $\rho$, one uses the convex roof construction of the Schmidt rank, called Schmidt number, or $\operatorname{SN}(\rho)$ \cite{terhal2000schmidt}:
\begin{align}
\operatorname{SN}(\rho) := \min_{\rho = \sum_i p_i \ketbra{\psi_i}} \max_i \operatorname{SR}(\ket{\psi_i}),
\end{align}
i.e., it is given by the maximal Schmidt rank within a given decomposition of $\rho$, minimized over all decompositions.
Due to the minimization, it is usually hard to calculate the Schmidt number of a given quantum state.
For a bipartite system of dimension $d\times d$, the maximal Schmidt number is given by $d$, and one can define the sets of Schmidt number $k$ as
\begin{align}
S_k = \{\rho\,:\,\operatorname{SN}(\rho) \leq k\}.
\end{align}
Clearly, $S_k \subset S_{k+1}$, and $S_1$ is the usual set of separable states. The maximally entangled state $\ket{\phi^+} = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} \ket{ii}$ is a member of $S_d$, but not of $S_{d-1}$, and it can be shown that \cite{terhal2000schmidt, horodecki1999reduction}
\begin{align}\label{eq:phipoverlap}
\braket{\phi^+|\rho_k|\phi^+} \leq \frac kd
\end{align}
for all states $\rho_k \in S_k$.
In order to certify a specific Schmidt number experimentally, it is useful to define an analogue of entanglement witnesses for Schmidt numbers \cite{sanpera2001schmidt}. We say that an observable $W_k$ is a Schmidt-number-$k$ witness, if
\begin{itemize}
\item $\operatorname{Tr}(W_k \rho_k) \geq 0$ for all $\rho_k \in S_k$,
\item $\operatorname{Tr}(W_k \rho) < 0$ for at least one $\rho$.
\end{itemize}
Thus, whenever one finds in an experiment that $\operatorname{Tr}(W_k \rho) < 0$, then $\rho$ must have at least Schmidt number $k+1$. Note that for $k=1$, we recover the usual notion of an entanglement witness \cite{guhne2009entanglement}. In order to certify that a given observable is a Schmidt-number-$k$ witness, it is sufficient to minimize its overlap w.r.t.~pure states in $S_k$ and show that the minimum is non-negative. This can be seen from the fact that the sets $S_k$ are compact, thus, one can find a (potentially mixed) optimal quantum state $\rho_k^\star$ such that $c_k := \min_{\rho_k \in S_k} \operatorname{Tr}(W_k \rho_k) = \operatorname{Tr}(W_k\rho_k^\star)$. As $\operatorname{SN}(\rho_k^\star) \leq k$, it exhibits a decomposition $\rho_k^\star = \sum_i p_i \ketbra{\psi_i}$ with $\operatorname{SR}(\ket{\psi_i}) \leq k$. Thus, $c_k = \sum_i p_i \braket{\psi_i | W_k | \psi_i} \geq \sum_i p_i c_k = c_k$, implying that the inequality is an equality and each of the pure $\ket{\psi_i}$ achieves the same minimal value of $c_k$.
The observation of Eq.~(\ref{eq:phipoverlap}) can be directly transformed into a Schmidt-number-$k$ witness via the observable \cite{sanpera2001schmidt} \begin{align}\label{eq:standardwitness}
W_k = \mathds{1}_{d^2} - \frac dk \ketbra{\phi^+}.
\end{align}
We refer to this witness as the standard Schmidt-number witness, and we will compare our constructions to this one in the end.
The problem is that, in general, the minimization over pure states with fixed Schmidt rank remains challenging, and in many cases no analytical solution can be found. This can be remedied by relaxing the optimization slightly.
To that end, note that there exists a characterization of the set $S_k$ in terms of positive maps \cite{terhal2000schmidt}: It holds that $\rho \in S_k$ if and only if $(\mathds{1}_{d} \otimes \Lambda_k)(\rho) \geq 0$ for all $k$-positive maps $\Lambda_k\,:\,\mathcal{M}_d \rightarrow \mathcal{M}_d$, where $\mathds{1}_d$ denotes the identity map in dimension $d$. A map $\Lambda_k$ is called $k$-positive, if $\mathds{1}_{k} \otimes \Lambda_k$ is a positive map.
This characterization is useful, as it allows one to define slightly larger sets than $S_k$, which can be characterized with less effort. To that end, we define the generalized reduction map \cite{guhne2009entanglement, terhal2000schmidt}
\begin{align}\label{eq:reductionmap}
R_p(\rho) = \operatorname{Tr}(\rho)\mathds{1}_d - p\rho,
\end{align}
where $\rho \in \mathcal{M}_d$ is a single qudit mixed state. It was shown that $R_p$ is $k$-positive, but not $(k+1)$-positive, iff $p\in(\frac1{k+1}, \frac1k]$ \cite{terhal2000schmidt, tomiyama1985geometry}. For $p=1$ one recovers the usual reduction map. We use it to define
\begin{align}\label{eq:SkR}
S_k^R = \{\rho\,:\,(\mathds{1}_d \otimes R_{\frac1k})(\rho) \geq 0\}.
\end{align}
Using the relation between $k$-positive maps and states of Schmidt number $k$, it is clear that $S_k \subset S_k^R$. Thus, we can use these sets as an outer approximation of $S_k$, as they can be characterized by semi-definite constraints. The general embedding situation is displayed in Fig.~\ref{fig:sets1}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{sets1.png}
\caption{The sets $S_k$ of states of Schmidt number $k$ for $d=3$, as well as their outer approximations $S_k^R$ defined in Eq.~(\ref{eq:SkR}). }
\label{fig:sets1}
\end{figure}
We use this fact to find a lower bound on the optimal value of $c_k = \min_{\rho_k \in S_k}\operatorname{Tr}(W_k \rho_k) \geq \min_{\rho_k \in S_k^R}\operatorname{Tr}(W_k \rho_k)$. The latter optimization over $S_k^R$ is in fact a semi-definite program (SDP), that is efficiently solvable on a computer and facilitates further analytical insights by converting it from its primal into its dual form, given by
\opti{max}{y, S}{y\label{eq:dualsdp}}{S^\dagger=S, S\geq0,\nonumber;W_k -(\mathds{1} \otimes R_{\frac1k})(S)\geq y\mathds{1}.\nonumber}
This is equivalent to $\max_{S\geq 0} \lambda_{\text{min}}[W_k - (\mathds{1} \otimes R_{\frac1k})(S)]$. For this SDP it is possible to show strong duality, meaning that both the primal and the dual optimal values coincide. Strong duality holds thanks to Slater's condition \cite{vandenberghe1996semidefinite}: Both the primal and the dual have full rank feasible points, namely $\rho=\mathds{1} / d^2$ and $S=\mathds{1}$. Indeed, any feasible choice of $S$ provides a proper lower bound for $c_k$, allowing one to shift $W_k$ such that it constitutes a proper Schmidt-number-$k$ witness.
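As an illustration, the relaxed minimization over $S_k^R$ can be set up directly with an off-the-shelf SDP modelling tool. The following sketch (our own; restricted to real symmetric matrices, which for a real observable is no restriction, and using a random placeholder observable instead of an actual witness candidate) encodes $\rho\geq0$, $\operatorname{Tr}(\rho)=1$ and $(\mathds{1}_{d}\otimes R_{1/k})(\rho)\geq0$ via the identity $(\mathds{1}_{d}\otimes R_p)(\rho)=\operatorname{Tr}_B(\rho)\otimes\mathds{1}_d-p\rho$:
\begin{verbatim}
import numpy as np
import cvxpy as cp

d, k = 4, 2                           # local dimension, Schmidt number bound
# embeddings I_d (x) |m> as d^2 x d matrices
E = [np.kron(np.eye(d), np.eye(d)[:, [m]]) for m in range(d)]

rho = cp.Variable((d * d, d * d), symmetric=True)
rho_A = sum(E[m].T @ rho @ E[m] for m in range(d))     # Tr_B(rho)
lifted = sum(E[m] @ rho_A @ E[m].T for m in range(d))  # Tr_B(rho) (x) I_d

constraints = [rho >> 0, cp.trace(rho) == 1,
               lifted - (1.0 / k) * rho >> 0]          # (1 (x) R_{1/k})(rho) >= 0

W = np.random.randn(d * d, d * d)                      # placeholder observable
W = (W + W.T) / 2
prob = cp.Problem(cp.Minimize(cp.trace(W @ rho)), constraints)
print(prob.solve())                   # lower bound on the minimum over S_k^R
\end{verbatim}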
\section{Construction of Schmidt number witnesses}\label{sec:algorithm}
We now turn to the question of how to construct a Schmidt-number-$k$ witness for a given setup and a given target state $\ket{\psi_0}$ that the setup aims to prepare. The goal is to minimize the experimental requirements to evaluate $\operatorname{Tr}(W_k \rho_0)$. Here, as a figure of merit, we limit the number of measurement settings needed to evaluate the overlap. Motivated from the specific setup considered in the next chapter, we restrict ourselves to projective measurements in the standard basis.
We write our witness as $W_k=\sum_{i,j,k,l=0}^{d-1} W_{ijkl} \ketbraa{ij}{kl}$ and denote the set of the indices of those coefficients which are experimentally accessible by $M \subset \{(i,j,k,l)\,:\,0\leq i,j,k,l < d\}$. As a first candidate for our witness, we start by formulating the following semi-definite program:
\opti{find}{}{\tilde{W}\label{eq:sdpstep1}}{\tilde{W}^{\dagger} = \tilde{W},\nonumber;\braket{\psi_0|\tilde{W}|\psi_0} = -1,\nonumber;-\mathds{1} \leq \tilde{W} \leq \mathds{1},\nonumber;\tilde{W}_{ijkl} = 0\quad\forall (i,j,k,l)\notin M.\nonumber}
The result of this program is an operator $\tilde{W}^{(1)}$ that has a negative eigenvalue of $-1$ corresponding to the state $\ket{\psi_0}$, which is the state we want to detect. Note that we introduce the upper bound $\tilde{W}^{(1)} \leq \mathds{1}$ so as to ensure a bounded result. Of course, $\tilde{W}^{(1)}$ lacks the property of having positive expectation value w.r.t. states of Schmidt number $k$. To implement that, we first fix $k$ and numerically minimize $\braket{\phi_{k}|\tilde{W}^{(1)}|\phi_{k}}$ over pure states $\ket{\phi_{k}}$ of Schmidt rank bounded by $k$, by explicitly parameterizing the Schmidt bases and coefficients and using gradient descent. We use this optimized (but possibly not optimal) state $\ket{\phi_{k}^{(1)}}$ to add another constraint to the SDP in Eq.~(\ref{eq:sdpstep1}), namely
\begin{align}\label{eq:constraintC}
\braket{\phi_{k}^{(1)} | \tilde{W} | \phi_{k}^{(1)}} \geq C,
\end{align}
where $C\in (-1,1]$, so as to separate the state $\ket{\phi_{k}^{(1)}}$ from the target state $\ket{\psi_0}$ as much as possible. This threshold value $C$ in general cannot be chosen equal to $0$, as the constraint $\tilde{W} \leq \mathds{1}$, introduced to make the optimization bounded, constrains the spectrum of $\tilde{W}$. Therefore, $C$ should be chosen as large as possible without rendering any of the SDPs in the algorithm infeasible. In practice, we start with a randomly chosen $C$, which stays fixed over the course of the algorithm until it stops as described below, before it is updated accordingly.
We then rerun the SDP to obtain a new candidate witness $\tilde{W}^{(2)}$, minimize again numerically the overlap with Schmidt rank $k$-states, yielding the state $\ket{\phi_{k}^{(2)}}$. We add the constraint $\braket{\phi_{k}^{(2)} | \tilde{W} | \phi_{k}^{(2)}} \geq C$ with the same $C$ as before to the SDP, and repeat the whole process, until
\begin{itemize}
\item either, the SDP turns infeasible at some point. In this case, the threshold $C$ was chosen too large and has to be reduced. We then rerun the whole algorithm.
\item or, the series of SDPs converges to some $\tilde{W}^{(\infty)}$, for which no more states $\ket{\phi_{d-1}}$ with expectation value below $C$ can be found. In this case, $C$ can be increased and the algorithm rerun.
\end{itemize}
The whole algorithm is illustrated in Fig.~\ref{fig:sets2}.
Note that the algorithm usually stops after few iterations.
We stop the whole process when the threshold value $C_k$ for $C$ has been determined sufficiently accurately using a divide-and-conquer algorithm, such that the SDP iterations remain barely feasible. The result of the algorithm is an operator $\tilde{W}^{(\infty)}$, which probably has the property that $\braket{\phi_k|\tilde{W}^{(\infty)}|\phi_k} \geq C_k$ for all Schmidt rank $k$ states $\ket{\phi_k}$. However, this is not guaranteed yet, as the optimization over these states within the algorithm is purely numerical and there is no way to be sure that the true minimum was found. The proof that this property really holds has to be done analytically after obtaining a promising candidate.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{sets2.png}
\caption{Illustration of the iterative algorithm described in the main text for $d=3$, $k=2$ and $\ket{\phi_0} = \ket{\phi^+}$. Initially, $\tilde{W}^{(1)}$ is found by the SDP in Eq.~(\ref{eq:sdpstep1}). Due to the constraint $\braket{\phi^+|W|\phi^+}=-1$, all $\tilde{W}^{(i)}$ must be tangent to the circle of radius 1 around the target state. For $\tilde{W}^{(1)}$, we then find the pure state $\ket{\phi_2^{(1)}}$ of Schmidt rank 2 that is furthest away from it, and add the constraint in Eq.~(\ref{eq:constraintC}) for it. After three iterations, no more such states can be found and $\tilde{W}^{(\infty)}$ is shifted to a proper witness candidate via Eq.~(\ref{eq:WtildetoWk}).}
\label{fig:sets2}
\end{figure}
Finally, $\tilde{W}^{(\infty)}$ can be shifted to yield a proper Schmidt-number-$k$ witness via
\begin{align}\label{eq:WtildetoWk}
W_k = \frac1{1+ C}\tilde{W}^{(\infty)} - \frac{C}{1+C}\mathds{1}_{d^2},
\end{align}
to ensure the property $\operatorname{Tr}(W_k\rho_k)\geq 0$ for all $\rho_k\in S_k$.
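In condensed form, the construction can be summarized by the following Python-style pseudocode (ours; \texttt{solve\_witness\_sdp} and \texttt{min\_overlap\_rank\_k} are placeholders for the SDP of Eq.~(\ref{eq:sdpstep1}) with the accumulated constraints and for the numerical minimization over Schmidt-rank-$k$ states, and the outer tuning of $C$ is only indicated in the comments):
\begin{verbatim}
import numpy as np

def construct_witness_candidate(psi0, M, k, C, max_iter=50):
    # C in (-1, 1] stays fixed here; an outer divide-and-conquer loop
    # adjusts it so that the SDPs remain barely feasible (see main text).
    extra = []                        # states phi with <phi|W|phi> >= C
    for _ in range(max_iter):
        W = solve_witness_sdp(psi0, M, extra, C)    # Eq. (eq:sdpstep1)
        if W is None:                 # infeasible: C was chosen too large
            return None
        phi, val = min_overlap_rank_k(W, k)         # gradient descent
        if val >= C:                  # no more violating rank-k states found
            # shift to a proper witness candidate, cf. Eq. (eq:WtildetoWk)
            return (W - C * np.eye(W.shape[0])) / (1 + C)
        extra.append(phi)             # constraint of Eq. (eq:constraintC)
    return None
\end{verbatim}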
\section{Finding a witness that requires few measurement settings}\label{sec:setup}
We now apply the algorithm to find a witness that requires fewer measurements than the standard witness. For many experimental setups, it is reasonable to assume that the number of required measurement settings scales linearly with the number of matrix elements that are measured. Thus, we try different sets $M$ of available coefficients in the witness, where the size of $M$ scales linearly in $d$. It turns out that the choice
\begin{align}
M &= \{(0,0,j,j)~|~0\leq j < d\} \cup \{(j,j,0,0)~|~0< j < d\} \cup \nonumber \\
& \cup \{(0,j,0,j)~|~0< j < d\}\cup \{(j,0,j,0)~|~0< j < d\}\cup \nonumber \\
& \cup\{(j,j,j,j)~|~0< j < d\}
\end{align}
yields good results while keeping the number of measurements in the order of $\mathcal{O}(d)$. We run the algorithm presented in the last section for $d=3,4,5$ and $6$ and different $k$, and obtain the following candidate for the witness, which will be shown to be a proper witness below:
\begin{align}\label{eq:shiftedWtilde}
W_k = \frac1{1-\vert C\vert_k}\tilde{W} + \frac{\vert C\vert_k}{1-\vert C\vert_k}\mathds{1},
\end{align}
with
\begin{align}\label{eq:Ck}
\vert C\vert_k = \sqrt{\frac{d^2-4d+4k}{d^2}}
\end{align}
and
\begin{align}\label{eq:Wtilde}
\tilde{W} &= \left(1-\frac2d\right)\ketbra{00} + \delta_{d,3}\sum_{i=1}^{d-1}(\ketbraa{0i}{0i} + \ketbraa{i0}{i0}) - \nonumber \\
&-\frac2d\sum_{i=1}^{d-1}(\ketbraa{00}{ii} + \ketbraa{ii}{00}) - \left(1-\frac2d\right)\sum_{i=1}^{d-1} \ketbra{ii}.
\end{align}
The fact that the algorithm yields the same $\tilde{W}$ for each choice of $k$, where only the thresholds $\vert C\vert_k$ vary, makes it possible to run a single experiment to measure the overlap with $\tilde{W}$ and deduce a lower bound on the Schmidt number from that.
The values $\vert C\vert_k$ correspond to the absolute value of the likely minimal overlap of $\tilde{W}$ with states $\ket{\phi_k}$ of Schmidt number $k$, which, according to the numerical minimization, are achieved by states of the form
\begin{align}\label{eq:specificfamilyk}
\ket{\phi_k(\alpha)} = \alpha\ket{00} + \sqrt{\frac{1-\alpha^2}{k-1}} \sum_{i=1}^{k-1} \ket{ii}.
\end{align} Minimizing over this family of states yields $\min_{\alpha} \braket{\phi_k(\alpha)|\tilde{W}|\phi_k(\alpha)} = -\vert C\vert_k$. Note that this could be non-optimal and provides just an upper bound on the correct value of $C_k$. The last step is to show that this is indeed optimal, implying that the obtained candidate is actually a proper witness. To show this, we deduce that the lower bound from the dual SDP approximation using the sets $S_k^R$ coincides with the upper bound of $\vert C\vert_k$.
\begin{table}[t]
\centering
\begin{tabular}{r||r|r}
$d=4$ & $\vert C\vert_k$ & $\vert C\vert_k^R$ \\
\hline
\hline
$k = 1$ & 0.500 & 0.530 \\
$k = 2$ & 0.707 & 0.715 \\
$k = 3$ & 0.866 & 0.866 \\
$k = 4$ & 1.000 & 1.000 \\
\hline
\rule{0pt}{4ex}
$d=7$ & $\vert C\vert_k$ & $\vert C\vert_k^R$ \\
\hline
\hline
$k = 1$ & 0.714 & 0.734 \\
$k = 2$ & 0.769 & 0.772 \\
$k = 3$ & 0.821 & 0.821 \\
$k = 4$ & 0.869 & 0.869 \\
$k = 5$ & 0.915 & 0.915 \\
$k = 6$ & 0.958 & 0.958 \\
$k = 7$ & 1.000 & 1.000 \\
\end{tabular}
\begin{tabular}{r||r|r}
$d=11$ & $\vert C\vert_k$ & $\vert C\vert_k^R$ \\
\hline
\hline
$k = 1$ & 0.818 & 0.825 \\
$k = 2$ & 0.838 & 0.839 \\
$k = 3$ & 0.858 & 0.858 \\
$k = 4$ & 0.877 & 0.877 \\
$k = 5$ & 0.895 & 0.895 \\
$k = 6$ & 0.914 & 0.914 \\
$k = 7$ & 0.932 & 0.932 \\
$k = 8$ & 0.949 & 0.949 \\
$k = 9$ & 0.966 & 0.966 \\
$k = 10$ & 0.983 & 0.983 \\
$k = 11$ & 1.000 & 1.000
\end{tabular}
\caption{The conjectured optimal threshold values $\vert C\vert_k$ from numerical minimization over Schmidt rank $k$ states, and the proven thresholds $\vert C\vert_k^R$ for $\tilde{W}$ in Eq.~(\ref{eq:Wtilde}) using the outer approximations $S_k^R$ and the dual of the SDP optimization: Whenever measuring an expectation value $\operatorname{Tr}(\tilde{W}\rho) < -\vert C\vert_k^R$, the state $\rho$ has at least Schmidt number $k+1$. Note that for $k=1$, a different proof exists that establishes that a violation of $\vert C\vert_1$ instead of $\vert C\vert_1^R$ suffices to detect entanglement.}
\label{tab:thresholds}
\end{table}
\begin{proposition}
For $k\neq 2$ and $d\geq4$, the witness in Eq.~(\ref{eq:shiftedWtilde}) is a Schmidt-number-$k$ witness.
\end{proposition}
\begin{proof}
Recall that a proper lower bound on $C_k$ is given by the SDP
\opti{min}{\rho}{\operatorname{Tr}(\tilde{W} \rho)}{\rho^\dagger = \rho, \operatorname{Tr}(\rho) = 1, \rho \geq 0,;(\mathds{1} \otimes R_{\frac1k})(\rho) \geq 0,}
where $R_p$ is given in Eq.~(\ref{eq:reductionmap}). Thus, we effectively optimize over states in $S_k^R$. To solve this optimization, we convert this SDP into its dual form, given by Eq.~(\ref{eq:dualsdp}). Even though any feasible choice of $S$ provides a proper lower bound for $C_k$ such that $W_k$ is a witness, we aim to find the best choice. Numerically checking the optimal solutions for low values of $d$ seems to indicate that for $k\geq 2$, the optimal $S$ is given by $S=\ketbra{x(a,b)}$ with the unnormalized vector $\ket{x(a,b)} = a\ket{00} + b\sum_{i=1}^{d-1}\ket{ii}$, $a,b\in \mathbb{R}$. Calculating the eigenvalues of $\tilde{W} - (\mathds{1} \otimes R_{\frac1k})(S)$ yields (ignoring multiplicities)
\begin{align}
\lambda_1 &= -a^2, \lambda_2 = -b^2, \lambda_3 = -b^2-\frac{d-2}{d},\\
\lambda_{4,5} &= \frac{1}{2dk}\left[p(a,b) \pm \sqrt{p(a,b)^2 - 4q(a,b)}\right],
\end{align}
with $p(a,b) = d[(1-k)a^2+(d-1-k)b^2]$ and $q(a,b) = dk[d(k-d)a^2b^2 + (d-2)(k-1)a^2 +(d-2)(d-1-k)b^2 + 4(d-1)ab-dk]$.
The minimal value of these $\lambda_i$ can only be one of the three eigenvalues $\lambda_1=-a^2, \lambda_3 = -b^2-\frac{d-2}d$ and $\lambda_5=\frac{1}{2dk}[p(a,b) - \sqrt{p(a,b)^2 - 4q(a,b)}]$.
For $k=1$ and $k=2$, the maximal minimal value of these is attained when the three values coincide, which yields as the optimal value the root of an even quartic polynomial that can be found efficiently; this value is, however, larger than the one in Eq.~(\ref{eq:Ck}) (see Table~\ref{tab:thresholds}). For $k\geq3$, we guess the optimal solution by choosing
\begin{align}
a &= \frac{1}{\sqrt{k-1}}\sqrt{\vert C\vert_k + \frac{d-2}{d}},\\
b &= \sqrt{\vert C\vert_k - \frac{d-2}{d}},
\end{align}
where $\vert C\vert_k$ is given in Eq.~(\ref{eq:Ck}).
These values are chosen such that $\lambda_3 = \lambda_5$, and for $d\geq3$ and $k\geq3$, it is easy to see that $\lambda_3 \leq \lambda_1$, yielding the lower bound of $\lambda_3 = -b^2-\frac{d-2}{d} = -\vert C\vert_k$ to the optimization problem. As this lower bound coincides with the upper bound values from the minimization over the specific family of states in Eq.~(\ref{eq:specificfamilyk}), it must be optimal.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{noise.png}
\caption{Detection power of the constructed witness in Eq.~(\ref{eq:Wtilde}) and the standard witness in Eq.~(\ref{eq:standardwitness}) for different dimensions as a function of white noise added to the state.}
\label{fig:noise}
\end{figure}
While for $k=1$ the bound from the SDP is not optimal, it is possible to carry out the optimization over pure product states in this case using Lagrange multipliers, and one recovers the value $-\vert C\vert_1$. This leaves the choice of $k=2$ being the only case where we could not prove the numerical result to be a proper witness.
\end{proof}
While the proof does not work for $k=2$, it can still give bounds for these cases, meaning that a proper (probably non-optimal) witness can be built.
In Table~\ref{tab:thresholds}, we list the values of $\vert C\vert_k$ compared to $\vert C\vert_k^R$ for different values of $d$. It shows that for $k=1$ and $k=2$, the difference between the reduction map value and the conjectured $\vert C\vert_k$ is very small.
We stress that the precise number of measurement settings required to measure this witness depends on the specific experimental setup. In Appendix~\ref{app:tm}, we show that for a specific setup using photon temporal modes \cite{brecht2015photon}, the evaluation of the standard witness requires $2d^2-d$ measurement settings, whereas the newly constructed witness requires only $5d-4$ of them.
While our witness requires fewer measurement settings, it comes at the price of reduced noise robustness. We model this by determining the maximum amount of white noise that can be added to the maximally entangled state before the witness fails to detect its $k$-dimensional entanglement. In particular, the noisy state is given by $\rho(\epsilon) = (1-\epsilon)\ketbra{\phi^+} + \epsilon \frac{\mathds{1}_{d^2}}{d^2}$. The result is displayed in Fig.~\ref{fig:noise} for dimensions $d=4$, $d=7$ and $d=11$. This indicates that our witness is
particularly useful in the low-noise regime.
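For reference, these thresholds can be estimated numerically from Eqs.~(\ref{eq:standardwitness}), (\ref{eq:Ck}) and (\ref{eq:Wtilde}); the following small script (our own, shown for $d=4$ and $k=3$, where the $\delta_{d,3}$ term of Eq.~(\ref{eq:Wtilde}) vanishes and $\vert C\vert_k$ is the proven threshold) scans the white-noise fraction up to which Schmidt number $k+1$ is still certified by each witness:
\begin{verbatim}
import numpy as np

d, k = 4, 3
def ket(i, j):                        # |ij> in the computational basis
    v = np.zeros(d * d); v[i * d + j] = 1.0
    return v

phi_plus = sum(ket(i, i) for i in range(d)) / np.sqrt(d)

# W_tilde from Eq. (Wtilde), without the delta_{d,3} term (d = 4 here)
W = (1 - 2 / d) * np.outer(ket(0, 0), ket(0, 0))
for i in range(1, d):
    W -= (2 / d) * (np.outer(ket(0, 0), ket(i, i)) + np.outer(ket(i, i), ket(0, 0)))
    W -= (1 - 2 / d) * np.outer(ket(i, i), ket(i, i))

C_k = np.sqrt((d**2 - 4 * d + 4 * k) / d**2)                     # Eq. (Ck)
W_std = np.eye(d * d) - (d / k) * np.outer(phi_plus, phi_plus)   # standard witness

def detects(witness, eps, threshold=0.0):
    rho = (1 - eps) * np.outer(phi_plus, phi_plus) + eps * np.eye(d * d) / d**2
    return np.trace(witness @ rho) < threshold

for eps in np.linspace(0, 0.5, 501):
    if not detects(W, eps, -C_k):
        print("constructed witness threshold ~", round(eps, 3)); break
for eps in np.linspace(0, 0.5, 501):
    if not detects(W_std, eps):
        print("standard witness threshold ~", round(eps, 3)); break
\end{verbatim}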
\section{Conclusions}\label{sec:conclusions}
We presented an iterative algorithm that quickly generates candidates for Schmidt-number-$k$ witnesses which require only a number of measurement settings scaling linearly with the dimension. We applied the algorithm to find a witness candidate that requires knowledge of only $\mathcal{O}(d)$ of the matrix elements. To show that the numerical candidate is a witness, we employed a semi-definite program that gives lower bounds on the minimal overlap with states of certain Schmidt number, which in almost all cases turned out to be optimal, as it coincided with the upper bound obtained before.
While we applied the algorithm to a specific case, let us remark that it can be applied to many other experimental setups as well. The choice of setting certain coefficients of the witness to zero, for example, can be replaced by any set of linear or semi-definite constraints.
\acknowledgments
We thank Laura Serino, Sophia Denker and Otfried Gühne for fruitful discussions. The authors acknowledge support by the QuantERA project QuICHE and the German Ministry of Education and Research (BMBF
Grant No. 16KIS1119K).
\bibliographystyle{apsrev4-1}
|
{
"timestamp": "2022-10-12T02:11:44",
"yymm": "2210",
"arxiv_id": "2210.05272",
"language": "en",
"url": "https://arxiv.org/abs/2210.05272"
}
|
\section*{Acknowledgements}
This work is supported by the German Federal Ministry for Economic Affairs and Climate Action (BIMKIT, grant no. 01MK21001H).
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=.96\columnwidth]{fig1_architecture}
\caption{In CASAPose, a segmentation decoder estimates object masks that guide a second decoder to predict 2D-3D correspondences for single-stage multi-object pose estimation.}
\label{fig:strcuture}
\end{figure}
In this paper, we present CASAPose, a novel architecture specifically designed to improve the multi-object scalability of a 6D pose estimator. Retrieving the pose of objects in front of a camera in real-time is essential for augmented reality and robotic object manipulation. Convolutional neural networks (CNNs) have led to a significant boost in accuracy and robustness against occlusion or varying illumination. Many methods train a separate CNN per object \cite{Peng2019,Song2020,Tremblay2018,Zakharov2020,Park2019_2,Zhigang2019,Sundermeyer2019}. In scenarios with multiple objects, this leads to impractical side effects. If it is not known which objects are present in the image, the image must either be inspected individually for each known object, or an additional detector is required for identification. Performing inference for each visible object increases the computation time. In addition, the need for multiple networks increases memory usage and lengthens training. Pose estimation is often formulated as an extension of semantic segmentation, with additional output maps to infer the pose from, e.g.~3D object coordinates or keypoint locations encoded in vector fields. The trivial multi-object extension to expand this secondary output for each object \cite{Peng2019} results in GPU-intensive slow training and performance degradation \cite{Gard2022}.
Our approach decouples semantic object identification and keypoint regression and uses the pixel-wise segmentation results as guidance for a keypoint-predicting branch. We introduce techniques from GAN-based conditional image synthesis and style transfer to the field of object pose estimation. First, we improve the descriptive capability of the network by adding a small number of class-specific extra weights to the network. Applying these parameters locally in the decoder as a class-adaptive (de)normalisation (CLADE) \cite{Tan2020} adds semantic awareness to the network. The locations of 3D keypoints on the object, represented by vectors pointing towards their 2D projections, are interpreted as a local object-specific style. Second, we strengthen the local focus of the network by integrating two more semantic-aware operations \cite{Dundar2020}. A guided convolution re-weights the convolutions in the keypoint branch and forces the decoder to focus on the mask region when estimating the 6D pose. A segmentation-aware upsampling uses the mask during upscaling of feature maps to avoid misalignments between low-resolution features and high-resolution segmentation maps. Both operations reduce the interference between mutually occluding objects and improve the quality of the pose estimates. From the clearly separated output, the 2D-3D correspondences are localised in a robust and differentiable manner via weighted least squares.
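As a rough sketch of the class-adaptive modulation (simplified PyTorch, not the actual CASAPose layers; the normalisation flavour and parameter shapes are illustrative only), per-class scale and shift parameters are gathered pixel-wise according to the semantic mask:
\begin{verbatim}
import torch
import torch.nn as nn

class CLADELayer(nn.Module):
    # Toy class-adaptive (de)normalisation: per-class scale/shift selected
    # pixel-wise by the semantic mask (a sketch, not the CASAPose code).
    def __init__(self, num_classes, num_channels):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        self.gamma = nn.Embedding(num_classes, num_channels)
        self.beta = nn.Embedding(num_classes, num_channels)

    def forward(self, x, mask):
        # x: (N, C, H, W) features, mask: (N, H, W) integer class ids
        x = self.norm(x)
        g = self.gamma(mask).permute(0, 3, 1, 2)   # (N, C, H, W)
        b = self.beta(mask).permute(0, 3, 1, 2)
        return g * x + b

layer = CLADELayer(num_classes=14, num_channels=32)  # e.g. 13 objects + background
feats = torch.randn(2, 32, 64, 64)
mask = torch.randint(0, 14, (2, 64, 64))
print(layer(feats, mask).shape)                      # (2, 32, 64, 64)
\end{verbatim}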
Since dataset creation for multi-object scenarios is very time consuming, we train only on synthetic data that comes with free labels and can be generated with free tools. To summarise, we make the following contributions:
\begin{enumerate}
\item We show that incorporating a small set of object-specific parameters through CLADE significantly increases the multi-object capacity of a pose estimation CNN.
\item We reduce the number of outputs of encoder-decoder based pose estimation networks and reduce intra-class interferences by segmen\-tation-aware transformations.
\item We exploit the strictly local feature processing to obtain a new differentiable 2D keypoint estimation method improving the accuracy of 2D-3D correspondences.
\item We train our network only on synthetic images and achieve state-of-the-art results on real data.
\end{enumerate}
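For illustration, the differentiable weighted least-squares localisation mentioned above can be written in closed form as in the following simplified NumPy sketch (our own; the exact weighting and masking used in CASAPose may differ). Each pixel $p$ inside the mask, with unit direction $v_p$ towards the keypoint and confidence $w_p$, contributes the weighted constraint $n_p^\top(x-p)=0$ with $n_p\perp v_p$:
\begin{verbatim}
import numpy as np

def localise_keypoint(pixels, directions, weights):
    # pixels: (N,2) coordinates, directions: (N,2) unit vectors towards
    # the keypoint, weights: (N,) per-pixel confidences
    normals = np.stack([-directions[:, 1], directions[:, 0]], axis=1)
    A = weights[:, None, None] * np.einsum("ni,nj->nij", normals, normals)
    M = A.sum(axis=0)                       # sum_p w_p n_p n_p^T
    b = np.einsum("nij,nj->i", A, pixels)   # sum_p w_p n_p n_p^T p
    return np.linalg.solve(M, b)            # argmin sum_p w_p (n_p^T (x - p))^2

rng = np.random.default_rng(0)              # toy example, keypoint at (50, 40)
kp = np.array([50.0, 40.0])
pix = rng.uniform(0, 100, size=(200, 2))
dirs = kp - pix
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(localise_keypoint(pix, dirs, np.ones(200)))    # ~ [50. 40.]
\end{verbatim}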
\section{Introduction}
\noindent We provide visual results of CASAPose in Section \ref{sec:visual}. Section \ref{sec:more_experiments} contains further comparisons with the state of the art as well as two more experiments. Finally, Section \ref{sec:additional_details} lists additional implementation details.
\section{Visual Results}
\label{sec:visual}
Fig.~\ref{results_lmo} shows estimated poses for three example images from Occluded LINEMOD (LM-O) \cite{Brachmann2014} using CASAPose trained for 8 objects. Similarly, Fig.~\ref{results_pbr} gives an impression of example results using a model trained for 13 objects on our \textit{pbr} test scene \cite{Hodan2021}.
Example results of CASAPose for HomebrewedDB (HB) \cite{Kaskman2019} are shown in Fig.~\ref{results_hb}.
\begin{figure}[h!]
\centering
\subfigure{\label{lmo_fig:a}\includegraphics[width=40mm]{243_cuboids}}
\subfigure{\label{lmo_fig:b}\includegraphics[width=40mm]{310_cuboids}}
\subfigure{\label{lmo_fig:c}\includegraphics[width=40mm]{1180_cuboids}}
\caption{Example results of CASAPose for LM-O, with bounding boxes for correctly estimated poses in green, incorrect poses in red, and ground truth poses in blue (ADD/S metric).}
\label{results_lmo}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure{\label{pbr_fig:a}\includegraphics[width=40mm]{35_cuboids}}
\subfigure{\label{pbr_fig:b}\includegraphics[width=40mm]{49_cuboids}}
\subfigure{\label{pbr_fig:c}\includegraphics[width=40mm]{194_cuboids}}
\caption{Example results of CASAPose for \textit{pbr} images excluded during training, with bounding boxes for correctly estimated poses in green, incorrect poses in red, and ground truth poses in blue (ADD/S metric).}
\label{results_pbr}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure{\label{hb_fig:a}\includegraphics[height=40mm]{hb_primesense}}
\subfigure{\label{hb_fig:b}\includegraphics[height=40mm]{hb_kinect}}
\caption{Example results of CASAPose for HB, with bounding boxes for correctly estimated poses in green and ground truth poses in blue (ADD/S metric). The left image is captured with the PrimeSense Carmine camera; the right image is captured with Microsoft Kinect 2.}
\label{results_hb}
\end{figure}
\subsection{Effect of Guided Operations} The ablation study showed that semantic guidance improves the estimated vector fields and the accuracy of pose estimation. The effect is best seen in direct visual comparison. Fig.~\ref{improvements_clade} shows the enhancement exemplified for an image from the \textit{pbr} dataset using colour-coded vector fields for visualisation. It shows the vector fields for the first keypoint, which is always located in the centre of each object.
Comparing the output of a model without the guided decoder in Fig.~\ref{clade_fig:b} with the output of a model with object-aware convolutions and object-aware upsampling in Fig.~\ref{clade_fig:c}, there is a clear improvement in the vector fields, especially in regions where objects overlap. In fact, we have observed that a network without semantic guidance is not able to produce perfectly separated vector fields even when it heavily overfits only a few images. Using CLADE alone without semantic guidance already improves the quality of the vector fields per object due to the object-specific parameters (see Table \ref{tab:ablation}), but a clear separation as in Fig.~\ref{clade_fig:c} can only be achieved in combination.
\begin{figure}[h!]
\centering
\subfigure[input image and estimated mask]{\label{clade_fig:a}\includegraphics[width=.32\linewidth]{effect_guided/49_18_mask}}
\subfigure[estimated vector field (Base)]{\label{clade_fig:b}\includegraphics[width=.32\linewidth]{effect_guided/49_18_naive}}
\subfigure[estimated vector field (C/GCU)]{\label{clade_fig:c}\includegraphics[width=.32\linewidth]{effect_guided/49_18_clade}} \\
\subfigure[detail comparison 1]{\label{clade_fig:d}\includegraphics[width=.35\linewidth]{effect_guided/comparison_1}}
\subfigure[detail comparison 2]{\label{clade_fig:e}\includegraphics[width=.35\linewidth]{effect_guided/comparison_2}}
\caption{Visual comparison of the estimated vector fields for a network with (\textit{C/GCU}) and without (\textit{Base}) the semantically guided operations. In the detail comparisons, \textit{Base} is on the left, while \textit{C/GCU} is on the right.}
\label{improvements_clade}
\end{figure}
\subsection{Characteristics of the Learned Confidence Maps}
Fig.~\ref{keypoint_locations} shows the estimated vector fields and confidence maps for an image from LM-O using the 8-object model. The estimated 2D locations are highlighted by a white circle. The confidence values are normalised inside each semantic mask for clearer presentation. For the first keypoint (Fig.~\ref{kp:b}), which is always in the centre of the object, the confidence is relatively constant in each mask, indicating that it is easy for the network to predict this point with high accuracy. In Fig.~\ref{kp:c} and \ref{kp:d}, it can be seen that the regions where the network predicts high confidence are often spatially close to the actual keypoint location. For example, for the tip of the tail of 'cat' in Fig.~\ref{kp:c}, it is logical that the best prediction of the location can be made nearby. Moreover, for example, 'ape' in Fig.~\ref{kp:d} shows that the model predicts high reliability and thus computes the 2D position of a keypoint mainly from pixels near the object silhouette. Especially for non-textured objects, the silhouette provides important information about the orientation of the object. It seems appropriate that the vectors near the contour can be estimated with higher precision.
\begin{figure}
\centering
\subfigure[Image and estimated mask]{\label{kp:a}\includegraphics[width=.49\linewidth]{details/orig+mask}}
\subfigure[Keypoint \#1]{\label{kp:b}\includegraphics[width=.49\linewidth]{details/keypoint_0}}
\subfigure[Keypoint \#4]{\label{kp:c}\includegraphics[width=.49\linewidth]{details/keypoint_4}}
\subfigure[Keypoint \#5]{\label{kp:d}\includegraphics[width=.49\linewidth]{details/keypoint_5}}
\caption{Estimated vector fields and confidence maps for three out of nine keypoints.}
\label{keypoint_locations}
\end{figure}
\section{Additional Experiments}
\label{sec:more_experiments}
\subsection{BOP Challenge Evaluation}
\label{sec:experiment_bop}
In the BOP Challenge 2020 \cite{Hodan2021}, multiple methods were submitted for several pose estimation datasets, including LM-O with synthetic training. The results reported there are in most cases significantly better than those in the original publications, owing to changes introduced after publication. We also evaluated our results against the BOP benchmark. It calculates a metric called Average Recall (\textbf{AR}), which is the average of the results for three pose error functions: Maximum Symmetry-Aware Projection Distance (MSPD), Maximum Symmetry-Aware Surface Distance (MSSD), and Visible Surface Discrepancy (VSD). Further details can be found in \cite{Hodan2021}.
Table \ref{tab_bop} lists the results of our method with this metric. CASAPose\textsubscript{8} is our final result for the 8-object case from the main paper. EPOS \cite{Hodan2020, Hodan2021}, the only other single-stage multi-object method, achieved an \textbf{AR} of 54.7, slightly higher than CASAPose. CASAPose\textsubscript{8*}, trained with minimally different hyperparameters (increasing $\lambda_4$ from $0.007$ to $0.01$), in turn achieves a slightly higher result, showing that both methods are similarly accurate. Still, our method is multiple times faster (37 ms vs. 468 ms for EPOS)\footnote{The difference is so significant that it cannot be explained by our faster evaluation GPU alone.}. CDPN \cite{Zhigang2019, Hodan2021}, the best method using only RGB images and no additional refinement (EPOS would be the second best method on LM-O with these properties), reaches an \textbf{AR} of $62.4$, but trains a separate network for every object. They make numerous extensions to their original approach (adding more complicated domain randomisation and a more powerful backbone) that would potentially improve our method as well, but are out of the scope of this paper. The innovations of our paper, which convert a multi-stage approach (one network per object plus a bounding box detector) into a single-stage approach (one network for all objects without the need for a bounding box detector), could be applied analogously to their method.
\begin{table}[h!]
\setlength\tabcolsep{2.6 pt}
\small
\begin{center}
\begin{tabular}{l c c c c c }
\toprule
{Arch.} & DNN & $AR$ & $AR_{MSPD}$ & $AR_{MSSD}$ & $AR_{VSD}$ \\
\midrule
EPOS \cite{Hodan2020, Hodan2021} & 1/set & 54.7 & 75.0 & 50.1 & 38.9 \\ \midrule
CASAPose\textsubscript{8} & 1/set & 54.2 & 74.3 & 49.4 & 39.0 \\
CASAPose\textsubscript{8*} & 1/set& 55.4 & 75.3 & 50.8 & 40.2 \\ \midrule \midrule
CASAPose\textsubscript{2x4} & 2/set & 57.4 & 77.1 & 52.9 & 42.1\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{\label{tab_bop} Comparison of different variants of our method using the BOP benchmark on LM-O with EPOS \cite{Hodan2020} in BOP configuration \cite{Bop2020, Hodan2021}. CASAPose\textsubscript{2x4} is the \textit{'4 Obj. 2x'} result from Table \ref{tab:capacity} of the main paper using 2 networks, each for 4 objects.}
\end{center}
\end{table}
\subsection{Ablation Study: Guided Decoder}
We tested different versions of the semantically guided decoder for the 13-object configuration trained with \textit{DKR} (Table \ref{tab:ablation_guided}). The first variant \textit{C/GU} uses only guided upsampling and no guided convolution. Compared with CLADE (\textit{C}) alone, this does not bring an improvement, since the increased accuracy during upsampling is cancelled out by the following regular convolution, which does not take the masks into account. Adding guided convolutions in the first 3 or 4 of 5 decoder blocks (\textit{C/GCU3}, \textit{C/GCU4}) improves the average \textbf{2DP} and the average \textbf{ADD/S}. No clear difference is visible between \textit{C/GCU3} and \textit{C/GCU4}. Comparing the final model \textit{C/GCU5} (with guided convolutions in all decoder blocks) with \textit{C/GCU4}, the \textbf{2DP} decreases by 0.6\% on average, while \textbf{ADD/S} increases by 4.1\%, which outweighs the decrease and makes this architecture the best among the tested ones.
\subsection{Influence of Keypoint Regression}
Table \ref{tab:pose_accuracy} compares different variants of the calculation of 2D keypoint positions. $LS_{1stComp.}$ is the variant used in our final model. It applies \textit{DKR} on the largest connected component of each object class and clearly outperforms RANSAC voting ($PV_{RANSAC}$) \cite{Peng2019} used with the same trained model. Interestingly, if a network learns to estimate confidence maps with \textit{DKR} during training, the results of RANSAC voting also improve ($PV_{RANSAC}$ compared with $PV_{RANSAC*}$). This suggests that least squares optimisation over all vectors in a region during training also improves the global accuracy of the estimated vectors.
Applying \textit{DKR} on a complete mask without connected component filtering ($LS_{All}$) deteriorates the performance, indicating that potential clutter in the estimated semantic masks should be removed before calculating the 2D positions. We tested \textit{DKR} on the second largest connected component ($LS_{2ndComp.}$) and found that it almost never leads to a correct pose. Thus, at least when only one object per class is visible, using only the largest connected component is well suited. A proposal for adaptation to multi-instance scenarios is given in the main paper.
\begin{table}
\small
\parbox{.5\linewidth}{
\centering
\begin{tabular}{l c c c c}\toprule
{} & \multicolumn{2}{c}{LM-O}& \multicolumn{2}{c}{LM}
\\\cmidrule(lr){2-3}\cmidrule(lr){4-5}
{} & 2DP & ADD/S& 2DP & ADD/S\\\midrule
C & 51.4 & 28.9 & 93.6 & 64.7 \\
C/GU & 50.7 & 30.0 & 93.9 & 62.9 \\
C/GCU3 & 52.0 & 32.1 & 93.7 & 65.5 \\
C/GCU4 & \textbf{52.7} & 31.9 & 93.5 & 64.9 \\
\textbf{C/GCU5} & 51.5 & \textbf{32.7} & \textbf{93.8} & \textbf{68.1} \\\bottomrule
\end{tabular}
\vspace{2mm}
\caption{\label{tab:ablation_guided} Comparison of different versions of the semantically guided decoder using the 13-object model with \textit{DKR}.}
}
\hfill
\parbox{.45\linewidth}{
\centering
\begin{tabular}{l c c }\toprule
{} & 2DP & ADD/S \\
\toprule
$PV_{RANSAC*}$ & 49.2 & 26.7\\
$PV_{RANSAC}$ & 50.4 & 30.8\\\midrule
$LS_{All}$ & 45.3 & 29.7\\
$LS_{2nd Comp.}$ & 7e-3 & 2e-3\\
$LS_{1stComp.}$ & \textbf{51.5} & \textbf{32.7}\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{\label{tab:pose_accuracy}Comparison of different variants of 2D keypoint calculation using the 13-object model (\textit{C/GCU5}) on LM-O.}
}
\end{table}
\section{Additional Details}
\label{sec:additional_details}
\subsection{Differentiable Keypoint Regression}
The Differentiable Keypoint Regression uses a weighted least squares intersection-of-lines calculation, incorporating confidence scores as weights, as described, e.g., in \cite{Traa2013}. One system is constructed per keypoint per object. All systems are solved in parallel using \texttt{tf.linalg.pinv} to calculate the Moore-Penrose pseudo-inverse. \textit{DKR} uses the \textit{softplus} function to translate the network output into the weights of the least squares calculations. It is a smooth approximation of the ReLU function that constrains the output of the network to be non-negative. Compared with \textit{sigmoid}, it allows weights greater than 1 to be predicted. During training, we add a regularisation term to avoid drift of the least squares weights for \textit{DKR} towards zero or infinity. The mean value in the foreground regions of each output map is $\ell1$ regularised to be close to a constant value of 0.7.
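For illustration, the following minimal sketch shows how such a weighted least squares intersection of 2D lines can be computed with \texttt{tf.linalg.pinv}. It is not the exact implementation; the function name, tensor shapes, and the assumption of unit direction vectors are illustrative choices for this example.
\begin{verbatim}
import tensorflow as tf

def weighted_line_intersection(points, dirs, conf):
    # points: (N, 2) pixel coordinates inside one semantic region
    # dirs:   (N, 2) unit vectors predicted at these pixels
    # conf:   (N,)   non-negative weights, e.g. softplus(network output)
    eye = tf.eye(2, dtype=points.dtype)
    # Projector onto the normal space of each line: I - d d^T
    proj = eye - tf.einsum('ni,nj->nij', dirs, dirs)            # (N, 2, 2)
    w = tf.reshape(conf, (-1, 1, 1))
    A = tf.reduce_sum(w * proj, axis=0)                         # (2, 2)
    rhs = tf.einsum('nij,nj->ni', proj, points)                 # (N, 2)
    b = tf.reduce_sum(tf.reshape(conf, (-1, 1)) * rhs, axis=0)  # (2,)
    # The Moore-Penrose pseudo-inverse keeps the solve differentiable
    x = tf.linalg.pinv(A) @ b[:, None]                          # (2, 1)
    return tf.squeeze(x, axis=-1)
\end{verbatim}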
\subsection{Hyperparameter Choices}
The losses $\mathcal{L}_{Seg}$, $\mathcal{L}_{Vec}$, $\mathcal{L}_{PV}$, and $\mathcal{L}_{Key}$ are weighted with the factors $\lambda_{1-4}$. Previous work weighted $\mathcal{L}_{Seg}$ and $\mathcal{L}_{Vec}$ equally ($\lambda_1=\lambda_2=1.0$) \cite{Peng2019}, or additionally added $\mathcal{L}_{PV}$ as a regulariser with a much smaller weight \cite{Yu2020}. In tests without \textit{DKR} and $\mathcal{L}_{Key}$, we determined $0.015$ as a suitable choice for $\lambda_3$. This is larger than the recommendation of \cite{Yu2020}, but leads to stable convergence in our case. A reduction of the influence of $\mathcal{L}_{Vec}$ ($\lambda_2=0.5$) after including $\mathcal{L}_{Key}$ preserves the balance between segmentation ($\lambda_1$) and vector field prediction ($\lambda_{2-4}$). In summary, the weights were $\lambda_1 = 1.0$, $\lambda_2 = 0.5$, $\lambda_3 = 0.015$ and $\lambda_4 = 0.007$.
As reported in Section \ref{sec:experiment_bop}, we also trained a full model using $\lambda_4=0.01$ and observed a slight accuracy increase for LM-O (8 objects). However, the 13-object model (LM) did not converge as well in this setting.
\subsection{Further Details}
\begin{enumerate}
\item The Farthest Point Sampling (FPS) algorithm is used to calculate the 3D locations of the keypoints \cite{Peng2019}; a minimal sketch of the sampling step is given after this list. The keypoint set is initialised by adding the object centre. The 3D models of HB originate from a different 3D scan and have their origin in a different location than the models of LM. We aligned them with the Iterative Closest Point (ICP) algorithm and calculated a fixed compensation transformation for each model. It is applied to the 3D keypoints to enable a comparison with HB's ground truth.
\item Pose estimation uses OpenCV's \texttt{cv::solvePnPRansac} with EPnP \cite{Lepetit09} followed by a call of \texttt{cv::solvePnP} with \texttt{SOLVEPNP\_ITERATIVE} and the previous pose as \texttt{ExtrinsicGuess}.
\item During training, we use scenes $0$-$48$ from the synthetic \textit{pbr} LINEMOD images from \cite{Hodan2021} resulting in 49000 training images. Scene $49$ (1000 images) is kept for testing and is used in the ablation study.
\item The experiments were conducted using TensorFlow 2.9 with the Adam optimiser \cite{Kingma2014}. Our custom layers use TensorFlow's \texttt{@tf.function(jit\_compile=True)} for acceleration.
\end{enumerate}
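As referenced in item 1 above, the following NumPy sketch illustrates greedy farthest point sampling initialised with the object centre. Variable names and the exact seeding are assumptions for this illustration and the original implementation may differ.
\begin{verbatim}
import numpy as np

def farthest_point_sampling(vertices, k, centre):
    # vertices: (V, 3) model vertices, centre: (3,) object centre
    # Greedily adds the vertex farthest from all keypoints selected so far.
    keypoints = [centre]
    dist = np.linalg.norm(vertices - centre, axis=1)
    for _ in range(k - 1):
        idx = np.argmax(dist)
        keypoints.append(vertices[idx])
        dist = np.minimum(dist,
                          np.linalg.norm(vertices - vertices[idx], axis=1))
    return np.array(keypoints)
\end{verbatim}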
\section{Related Work}
\label{sec:related work}
\paragraph{Multi-Object 6D Pose Estimation}
Approaches for 6D pose estimation with CNNs usually either regress the object pose directly \cite{Xiang2018,Billings2019,Wang2020}, describe the object's appearance with a latent space code to compare it to pre-generated codes \cite{Sundermeyer2019,Wen2020, Park2020}, or regress the positions of 2D projections of 3D points and calculate the pose with a Perspective-n-Point (PnP) algorithm. Approaches from the last category either predict object-specific keypoints \cite{Hu2019,Peng2019,Song2020,Tremblay2018,Rad2017} or dense correspondence/coordinate maps \cite{Zakharov2020,Park2019_2,Hou2020,Thalhammer2021,Zhigang2019,Hodan2020}.
To deal with multiple objects, most approaches train a separate network per object and need multiple inferences per image \cite{Peng2019,Song2020,Tremblay2018,Zakharov2020,Park2019_2,Zhigang2019,Sundermeyer2019}. Alternatively, increasing the number of output maps has been proposed as a multi-object extension \cite{Peng2019,Zakharov2020,Rad2017,Hodan2020}, which risks serious accuracy drops \cite{Sock2020} or complex and slow processing \cite{Hodan2020}. Each added object contributes multiple extra output channels of the same size as the input image, which requires significant GPU memory and complicates training \cite{Gard2022}. Sock \emph{et al}\bmvaOneDot \cite{Sock2020} add additional weights to optimise a CNN for multiple objects and reduce the multi-object performance gap. Still, they need the object class as input from a separate bounding box detector to perform their re-parametrisation, and one inference for every visible object. This principle is also applied by other multi-stage methods using object-specific networks per detection \cite{Zhigang2019,Billings2019,Park2019_2, Sundermeyer2019, Zhang2021, Yang2021}.
Other single-stage multi-object strategies have rarely been discussed. The category-level approach by Hou \emph{et al}\bmvaOneDot \cite{Hou2020} unifies features from different instances of one class. It requires approximately similar geometric structure per category and aligned models during training.
Similar to us, two recent works \cite{Thalhammer2021, Aing2021} make single-stage multi-object pose estimation more performant by using a patch-based approach on a specialised feature pyramid network \cite{Thalhammer2021}, or by predicting an error mask to filter faulty pixels near silhouettes \cite{Aing2021}. Especially for multi-object approaches, the use of synthetic training data is of high importance due to the difficult creation of real datasets \cite{Thalhammer2021, Hodan2020}. It introduces difficulties due to the domain gap \cite{Wang2020, Yang2021, Zhang2021}, which we narrow by giving the network access to the silhouettes, a nearly domain-invariant feature \cite{Wen2020, Billings2019}.
\paragraph{Conditional Normalisation} \quad
Normalisation layers in CNNs speed up training and improve accuracy \cite{Ioffe2015}. A learnable affine transformation recentres and rescales the normalised features. In the unconditional case, the normalisation does not depend on external data \cite{Ioffe2015, Ulyanov2016}. Conditional Instance Normalisation (CIN) \cite{Dumoulin2017} increases the capacity of a CNN by learning multiple sets of normalisation parameters for different classes, e.g.~for neural style transfer \cite{Gatys2015}.
Sock \emph{et al}\bmvaOneDot apply CIN on multi-object 6D pose estimation \cite{Sock2020}, but require object identity as input and handle only one identity at a time. Spatially-adaptive instance (de)norma\-lisation (SPADE) \cite{Park2019} uses per-pixel normalisation parameters depending on semantic segmentation and a pixel’s position.
Tan \emph{et al}\bmvaOneDot \cite{Tan2020} reduce computational and parameter overhead of SPADE by prioritising semantic over spatial awareness. In their class-adaptive instance (de)norma\-lisation (CLADE), a guided sampling operation selects the set of de-normalisation parameters based on the semantic class of a pixel. Similar to \cite{Gard2022}, we extend \cite{Sock2020} by first estimating a semantic segmentation and then infusing the CIN parameters on a local per-pixel level with a CLADE layer in a single pass.
\paragraph{Content-aware Transformations}
While the spatial invariance of convolutions is beneficial for most computer vision tasks, sometimes local awareness of a filter can be helpful. CoordConv \cite{Liu2018} demonstrates the benefit of giving a filter access to its position. It has been used for panoptic segmentation \cite{Sofiiuk2019,Wang2019} and semantic image synthesis \cite{Tan2020}. Other approaches use binary maps to mask out regions of the feature map, e.g.~for inpainting, depth upsampling, or padding \cite{Liu2018_3,Uhrig2017,Liu2018_2}. Guided convolutions have been extended for non-binary annotations \cite{Yu2019} and adapted for multi-class image synthesis \cite{Dundar2020}.
Mazzini \emph{et al}\bmvaOneDot \cite{Mazzini2018} point out that spatially invariant operations for feature-map upsampling fail to capture the semantic information required by dense prediction tasks. Guided operations make upsampling learnable \cite{Wang2019_2,Mazzini2018}. The content-aware upscaling by Dundar \emph{et al}\bmvaOneDot \cite{Dundar2020} takes advantage of a higher resolution mask to keep features aligned with the instance segmentation.
We apply guided convolution and upsampling \cite{Dundar2020} in our segmentation-aware decoder to force the network to focus on the object region when inferring the keypoint locations. This strengthens the local influence of the CLADE parameters inside the respective segmented region.
\section{Multi-Object Pose Estimation}
\label{sec:method}
CASAPose is an encoder-decoder structure with two decoders applied successively (Fig.~\ref{fig:strcuture}).
The first estimates a segmentation mask that guides the second in estimating vectors pointing to 2D projections of predefined 3D object keypoints \cite{Peng2019}. For each pixel, the set of keypoints corresponds to its semantic label. The second decoder additionally outputs confidences from which we calculate the common intersection points with weighted least squares (LS). The differentiable operation allows direct optimisation of the intersection point in the loss function.
\subsection{Object-adaptive Local Weights}
\label{sec:clade}
The injection of object-specific weights is intended to enhance the capacity of a pose estimation network for multiple objects. The process is therefore divided into two subtasks. First, a semantic segmentation $m \in \mathbb{L}^{H\times W}$ is estimated to specify each pixel's object class, out of a set $\mathbb{L}$ with ${N_c}$ class indices, in an image of size $H\times W$. Second, a semantic image synthesis, conditioned on the semantic segmentation, generates a set of vector fields specifying 2D-3D correspondences of object keypoints to calculate poses from.
After a convolution, a conditional normalisation layer normalises each channel $k$ out of $N_k$ channels of features $x$ with its mean $\mu_k$ and standard deviation $\sigma_k$. The result is modulated with a learned scale $\gamma_k$ and shift $\beta_k$ depending on a condition $l = 1, \dots , N_c$
\begin{equation}
x^{out}_k = \gamma^{l}_k \frac{x_k - \mu_k}{\sigma_k} + \beta^{l}_k
\end{equation}
To reach semantic awareness, we follow the idea of class-adaptive (de)nor\-ma\-lisation (CLADE) \cite{Tan2020} and make the modulation parameters $\gamma$ and $\beta$ functions of the input segmentation. Thereby, a set $\Gamma=(\gamma^1_k, \dots \gamma^{N_c}_k)$ of scale parameters and a set $B=(\beta^1_k, \dots \beta^{N_c}_k)$ of shift parameters are learned. The intermediate semantic segmentation is used by Guided Sampling \cite{Tan2020} to convert the sets into dense modulation tensors. Corresponding to the segmentation map, the modulation tensors are filled with the respective parameters, as shown in Fig.~\ref{fig:clade}.
Guided Sampling uses a discrete index to select a specific row from a matrix of de-normalisation parameters. The required $argmax$ operation over the estimated class probabilities sacrifices differentiability, required for end-to-end training. To keep it, we directly use the segmentation input $S = (s_{x,y,l}) \in \mathbb{R}^{H\times W\times N_c}, s \in (0,1)$. The parameter sets are the matrices $\Gamma = (\gamma_{l,k})\in \mathbb{R}^{N_c \times N_k}$ and $B =(\beta_{l,k})\in \mathbb{R}^{N_c \times N_k}$, and the dense modulation tensors are the scalar product over the last dimension of $S$ and the parameter matrices $\Gamma$ and $B$.
\begin{equation}
\bar{\gamma}_{x,y,k} = \sum_{l=0}^{N_c} s_{x,y,l} \gamma_{l,k} ,\quad \bar{\beta}_{x,y,k} = \sum_{l=0}^{N_c} s_{x,y,l} \beta_{l,k}
\end{equation}
Since this operation should simulate a discrete selection, the predicted label $s_{x,y,l}$ must be either close to 1 or 0. The raw intermediate segmentation $\hat{S} = (\hat{s}_{x,y,l})$ is normalised with a softmax function scaled with a temperature parameter $\tau$ to push all but one value close to 0.
\begin{equation}
s_{x,y,l} = softmax(\tau \hat{s}_{x,y,l})
\end{equation}
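The following sketch summarises how the dense modulation tensors can be formed from the soft segmentation and applied after normalisation. It assumes TensorFlow; the temperature value, the instance-style statistics, and all names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import tensorflow as tf

def clade_modulation(seg_logits, gamma, beta, x, tau=50.0, eps=1e-5):
    # seg_logits: (H, W, Nc) raw intermediate segmentation
    # gamma, beta: (Nc, Nk) learned per-class parameters
    # x: (H, W, Nk) features after the preceding convolution
    s = tf.nn.softmax(tau * seg_logits, axis=-1)      # near one-hot selection
    gamma_d = tf.einsum('hwl,lk->hwk', s, gamma)      # dense scale tensor
    beta_d = tf.einsum('hwl,lk->hwk', s, beta)        # dense shift tensor
    mu, var = tf.nn.moments(x, axes=[0, 1], keepdims=True)
    x_norm = (x - mu) / tf.sqrt(var + eps)            # per-channel normalisation
    return gamma_d * x_norm + beta_d
\end{verbatim}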
\begin{figure}[t!]
\centering
\subfigure[CLADE using Guided Sampling.]{\label{fig:clade}\includegraphics[height=.352\linewidth]{fig2a_clade}}
\quad \quad \quad
\subfigure[Guided convolution and upsampling.]{\label{fig:guided}\includegraphics[height=.33\linewidth]{fig2b_guided}}
\caption{a) The segmentation is used to select modulation parameters from the weight matrices $\Gamma$ and $B$. b) Guided operations improve the alignment between features and mask.}
\end{figure}
\subsection{Semantically Guided Decoder}
\label{sec:guided}
Our goal is to synthesise keypoint vector fields, which are coherent and locally limited for each object. Based on the object a pixel belongs to, the vector should point to a particular location on that object. However, occluding objects should not influence each other, as this would decrease the accuracy in overlapping regions. We use pixel-wise object-specific weights to discriminate the objects from each other, and identify two challenges for the decoder.
\begin{enumerate}
\item Due to the spatial invariance of convolution, at object boundaries the decoder has no information about which parameters were used to normalise a position in the previous block.
\item A nearest neighbour (NN) upsampling after CLADE does not result in a feature map that perfectly matches the higher resolution semantic map in the next block.
\end{enumerate}
Inspired by Dundar \emph{et al}\bmvaOneDot \cite{Dundar2020}, we add segmentation-awareness to convolution and upsampling (Fig.~\ref{fig:guided}). The \textbf{object-aware convolution} ensures that the result of a convolution for an object only depends on feature values belonging to it. A mask $M$ filters weights $W$ for every image patch. The mask defines which locations contribute to the feature values and depends on the semantic segmentation $S$. To preserve differentiability, we avoid $argmax$ and do not use a hard binary mask.
\begin{equation}
\label{equ:guided}
m_{x,y}(i,j) = \begin{cases}
\bar{s}_{x+i,y+j} & \parbox[t]{4.0cm}{ \text{if} \quad $s_{x+i,y+j,l} = \bar{s}_{x+i,y+j}$ \text{with} \quad $\{ l | s_{x,y,l} = \bar{s}_{x,y} \}$ }\\
0& \text{otherwise}
\end{cases}
\end{equation}
In Equ.~\ref{equ:guided}, $\bar{s}$ is the maximum value along the class dimension of $S$. The indices $x,y$ correspond to the filter location in the image, $i,j$ to the position in the filter.
We apply the object-aware convolution with a $3\times3$ filter at every location by element-wise multiplication of the input features $X$ with mask $M$ before filtering, followed by normalisation.
\begin{equation}
x' = W^T (X \odot M) \, \frac{9}{\operatorname{sum}(M)}
\end{equation}
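A simplified sketch of this operation is given below. It assumes TensorFlow, static spatial shapes, and \texttt{SAME} padding; the patch extraction and the exact handling of the soft mask are illustrative choices and may differ from the actual implementation.
\begin{verbatim}
import tensorflow as tf

def object_aware_conv3x3(x, seg, kernel, eps=1e-6):
    # x: (B, H, W, Cin) features, seg: (B, H, W, Nc) soft segmentation,
    # kernel: (3, 3, Cin, Cout). A neighbour contributes only if its most
    # likely class equals the class of the centre pixel (cf. mask above).
    def patches(t):
        return tf.image.extract_patches(t, sizes=[1, 3, 3, 1],
                                        strides=[1, 1, 1, 1],
                                        rates=[1, 1, 1, 1], padding='SAME')
    s_max = tf.reduce_max(seg, axis=-1, keepdims=True)            # (B, H, W, 1)
    cls = tf.cast(tf.argmax(seg, axis=-1)[..., None], tf.float32)  # (B, H, W, 1)
    cls_p = patches(cls)                                           # (B, H, W, 9)
    smax_p = patches(s_max)                                        # (B, H, W, 9)
    mask = tf.where(tf.equal(cls_p, cls), smax_p, tf.zeros_like(smax_p))
    _, h, w, cin = x.shape
    x_p = tf.reshape(patches(x), (-1, h, w, 9, cin)) * mask[..., None]
    out = tf.einsum('bhwnc,nco->bhwo', x_p, tf.reshape(kernel, (9, cin, -1)))
    # Re-normalise by the number of valid (same-object) positions.
    return out * 9.0 / (tf.reduce_sum(mask, axis=-1, keepdims=True) + eps)
\end{verbatim}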
The \textbf{object-aware upsampling} layer enlarges the feature map without losing the alignment with $S$. Initially, $S$ is NN down-sampled $n$ times ($n=3$) to be used with features at different resolutions. The segmentation in the next higher and the current resolution, $S^u$ and $S^d$, guide the upsampling from low to high resolution features, $F^d$ and $F^u$ \cite{Dundar2020}. This preserves the spatial layout in the feature map after the object-aware convolution and CLADE. We do not apply hole-filling, and select the first feature in the $2\times2$ window for unknown locations.
\subsection{Differentiable Keypoint Regression}
\label{sec:keypoint}
The local processing of the latent map adds semantic-awareness to process different regions independently. We use this property to avoid the non-differentiable RANSAC estimation, commonly used to get 2D points from vector fields \cite{Peng2019}.
While exact intersection points can only be calculated for line pairs, a least squares solution can approximate them from multiple vectors \cite{Traa2013}. Each vector inside a semantic region adds one equation to a corresponding linear system. Solving with the Moore-Penrose pseudo-inverse guarantees a unique solution. An estimated per-pixel confidence captures the probability that a vector points in the right direction. Weighting the equations with the estimated confidence reduces the susceptibility to noise \cite{Traa2013}. An additional loss minimises the Euclidean distance between the calculated 2D coordinate and the ground truth. By this, the network learns in which regions the most accurate vectors are predicted and boosts their weighting in the calculation. We observe a focus on nearby regions as well as on the object contour (Fig.~\ref{fix:example}). A custom layer calculates the intersection points in parallel on the GPU, so that the network outputs the 2D locations directly. The confidence maps increase the number of outputs by the number of keypoints per object.
During inference, we cluster the semantic maps into connected components and solve the system for the largest component per class. By this, potential misdetections in the semantic map can be filtered out. This works very well if only one instance of each object is visible per image. For an extension to instance segmentation, the semantic segmentation decoder would have to be replaced by an instance segmentation decoder, e.g.~\cite{Cheng2020}. Since the components from Sections \ref{sec:guided} and \ref{sec:clade} can also be applied to instance masks, all their advantages remain.
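As a minimal illustration of this filtering step, the largest connected component of a binary class mask can be selected as follows. This NumPy/SciPy sketch only conveys the idea; the actual implementation and its CPU/GPU placement may differ.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def largest_component(class_mask):
    # class_mask: (H, W) boolean mask of one object class.
    labels, num = ndimage.label(class_mask)
    if num == 0:
        return np.zeros_like(class_mask, dtype=bool)
    sizes = ndimage.sum(class_mask, labels, index=np.arange(1, num + 1))
    # Keep only the pixels of the largest component for the LS system.
    return labels == (np.argmax(sizes) + 1)
\end{verbatim}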
\subsection{Merged Network Outputs}
The object-specific weights in the CLADE layers serve as a key that allows the decoder to decode the encoded features in an object-specific manner. The local processing by the object-aware operations minimises performance degradation and crosstalk between objects. The loss function is applied directly to the fused output. The number of channels of the keypoint decoder is constant, and only one channel is added per object in the segmentation decoder. For each keypoint, we compute a confidence map and a 2D vector field. This results in $3m+n+1$ output channels for $n$ objects with $m$ keypoints, e.g.~$41$ for $13$ objects (and 9 keypoints), compared to $365$ outputs for \cite{Peng2019} or $3392$ for \cite{Hodan2020}. The constant number of outputs reduces the GPU memory footprint and speeds up inference and training.
\section{Implementation Details}
\label{sec:details}
\paragraph{Architecture}
In CASAPose, a shared ResNet-18 \cite{He2016} provides features for two decoders. The first resembles \cite{Peng2019} and predicts a semantic mask through multiple blocks of consecutive skip connections, convolutions, batch normalisation (BN), leaky ReLU, and upsampling. The keypoint decoder is similar, but replaces BN with CLADE, the regular convolution with an object-aware convolution, and the blind upsampling with object-aware upsampling. Each block takes a scaled semantic mask as additional input to guide the replaced layers. We observed improved convergence if the ground truth segmentation, instead of the estimated segmentation, is used to guide the keypoint decoder during training. Both decoders can thereby calculate their result in parallel, which results in faster training. The semantic mask, the vector fields, and the confidence maps are passed to a keypoint regression layer (Section \ref{sec:keypoint}). It outputs nine 2D keypoints for each object. Their 3D locations were initially calculated using the farthest point sampling (FPS) algorithm. The pose is estimated with OpenCV's EPnP \cite{Lepetit09} in a RANSAC scheme. Due to its lightweight backbone, our network is small and has only $\approx14.8$ million weights. The CLADE layers increase the total number of weights by only 1024 per object.
\vspace{-3pt}
\paragraph{Training Strategy}
During training, all but one of the scenes from the BOP Challenge 2020 \cite{Hodan2021} synthetic LINEMOD dataset are used. The images are rendered nearly photo-realistically with physically-based rendering (\textit{pbr}). Object and scene parameters, e.g.~object, camera and illumination position, background texture, and material, are randomised. The objects are randomly placed on a flat surface with mutual occlusions. We narrow the domain gap by strong augmentation (contrast, colour, blur, noise). We follow Thalhammer \emph{et al}\bmvaOneDot \cite{Thalhammer2021} but vary the gain for sigmoid contrast \cite{Jung2021} within $\mathcal{U}(5, 10)$, since the listed values corrupt the image too much. Each network is trained for 100 epochs using the Adam optimiser \cite{Kingma2014} and a batch size of 18 on two NVIDIA A100 GPUs (a batch size of 4 is the maximum for a single Nvidia RTX 2080Ti). The initial learning rate of $0.001$ is halved after 50, 75 and 90 epochs. We use the \textit{smooth} $\ell1$ loss $\mathcal{L}_{Vec}$ and the Differentiable Proxy Voting Loss (DPVL) \cite{Yu2020} $\mathcal{L}_{PV}$ to learn the unit vectors, and Cross Entropy Loss with Softmax $\mathcal{L}_{Seg}$ to learn the segmentation. Additionally, the keypoint loss $\mathcal{L}_{Key}$ is defined as the \textit{smooth} $\ell1$ of the average Euclidean distance between the estimated keypoints and the ground truth keypoints. The overall loss is
\begin{equation}
\mathcal{L} = \lambda_1 \mathcal{L}_{Seg} + \lambda_2 \mathcal{L}_{Vec} +\lambda_3 \mathcal{L}_{PV} + \lambda_4 \mathcal{L}_{Key}
\end{equation} with $\lambda_1 = 1.0$ , $\lambda_2 = 0.5$, $\lambda_3 = 0.015$ and $\lambda_4 = 0.007$. The $\lambda$ values were determined in empirical preliminary studies using the unseen pbr scene.
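For clarity, a sketch of the keypoint term and of the weighted total loss is given below. It assumes TensorFlow; the smooth $\ell1$ threshold and the tensor shapes are assumptions, and the remaining loss terms follow \cite{Peng2019,Yu2020} and are not reproduced here.
\begin{verbatim}
import tensorflow as tf

def keypoint_loss(kp_est, kp_gt, delta=1.0):
    # kp_est, kp_gt: (num_objects, num_keypoints, 2) in pixel coordinates.
    d = tf.reduce_mean(tf.norm(kp_est - kp_gt, axis=-1), axis=-1)
    # smooth l1 on the average Euclidean distance per object
    return tf.reduce_mean(tf.where(d < delta,
                                   0.5 * d ** 2 / delta,
                                   d - 0.5 * delta))

def total_loss(l_seg, l_vec, l_pv, l_key,
               lambdas=(1.0, 0.5, 0.015, 0.007)):
    return (lambdas[0] * l_seg + lambdas[1] * l_vec
            + lambdas[2] * l_pv + lambdas[3] * l_key)
\end{verbatim}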
\vspace{-3pt}
\section{Experiments}
\label{sec:eval}
We use test images and objects from the widely used datasets LINEMOD (LM) \cite{Hinterstoisser2012}, Occluded LINEMOD (LM-O) \cite{Brachmann2014}, and HomebrewedDB (HB) \cite{Kaskman2019}. We avoid real camera images in training because, for practical applications, capturing and annotating large multi-object datasets is unrealistically costly. Our network estimates the poses of all detected objects in one pass. We report the standard metrics \textbf{ADD} and \textbf{ADD-S} \cite{Xiang2018} for symmetric (glue and eggbox) objects, and the \textbf{2D projection} \cite{Hinterstoisser2012} (\textbf{2DP}) metric. For \textbf{ADD/S}, the average 3D distance (between ground truth (gt) and transformed vertices) must be smaller than 10\% of the object diameter; for \textbf{2DP}, the 2D distance (between projected gt and transformed vertices) must be smaller than 5 pixels for a pose to be considered correct. We always list the \textbf{recall} of correct poses per object. Additional results can be found in the Appendix.
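To make these acceptance criteria concrete, a brute-force NumPy sketch of the \textbf{ADD} and \textbf{ADD-S} distances is given below. Shapes and thresholds are illustrative assumptions; in practice the symmetric variant is usually computed with a nearest-neighbour structure instead of the quadratic distance matrix.
\begin{verbatim}
import numpy as np

def add_distance(R_gt, t_gt, R_est, t_est, pts):
    # pts: (V, 3) model vertices; poses map model to camera coordinates.
    p_gt = pts @ R_gt.T + t_gt
    p_est = pts @ R_est.T + t_est
    return np.mean(np.linalg.norm(p_gt - p_est, axis=1))

def add_s_distance(R_gt, t_gt, R_est, t_est, pts):
    # Symmetric variant: distance to the closest transformed point.
    p_gt = pts @ R_gt.T + t_gt
    p_est = pts @ R_est.T + t_est
    d = np.linalg.norm(p_gt[:, None, :] - p_est[None, :, :], axis=-1)
    return np.mean(np.min(d, axis=1))

# A pose would count as correct if, e.g., add_distance(...) < 0.1 * diameter.
\end{verbatim}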
\subsection{Comparison with the State of the Art}
\label{sec:exp_comp}
\paragraph{{Occluded LINEMOD (LM-O) \cite{Brachmann2014}}}
Table \ref{tab:multi} compares the \textbf{ADD/S} metric on LM-O against other papers which only train on synthetic data or use additional unlabelled images from the test domain \cite{Li2021,Wang2020,Yang2021}. CASAPose achieves 27.8\% better results than the other single-stage approach PyraPose \cite{Thalhammer2021}. In contrast to their finding favouring patch-based approaches over encoder-decoder networks, we show that it is possible to achieve better results with a considerably smaller encoder-decoder network. CASAPose performs 3.8\% better than SD-Pose \cite{Li2021}, a state-of-the-art approach using entirely synthetic training data, although we train only a single network for all objects and omit pre-detection and preprocessing.
DAKDN \cite{Zhang2021}, the best weakly supervised approach, is outperformed by 6.5\%. Like us, the top-2 to top-6 approaches \cite{Zhang2021,Li2021,Thalhammer2021,Wang2020,Yang2021} all use physically-based rendered (\textit{pbr}) images in training. Compared to approaches using simpler synthetic data, namely CDPN \cite{Zhigang2019} and DPOD \cite{Zakharov2020}, our algorithm achieves far superior results. The result for 'eggbox' is weaker than for the other methods. Its symmetry leads to ambiguities of the 2D keypoint projections during training. Future work might consider adding a differentiable renderer and a projection-based, e.g.~edge-based, loss to explicitly account for symmetry. In addition, 'eggbox' is often heavily occluded, and the inclusion of real images (e.g.~semi-supervised) could improve its detection rate.
For EPOS \cite{Hodan2020}, another single-stage approach, only the Average Recall (\textbf{AR}) \cite{Hodan2021} metric is listed, with a value of 44.3, whereas our presented results correspond to an \textbf{AR} of 54.2.\footnote{The \textbf{AR} metric is usually used in BOP Challenges \cite{Hodan2021}, in which results different from \cite{Hodan2020} were obtained; see Appendix \ref{sec:experiment_bop} for more details.} \vspace{3pt}
\paragraph{LINEMOD (LM) \cite{Hinterstoisser2012}}
Table \ref{tab:single} compares our result on LM with one network trained for 13 objects to other methods using only synthetic training data. The achieved mean \textbf{ADD/S} of 68.1\% is 1.2\% better than SD-Pose \cite{Li2021} and 7.4\% better than PyraPose \cite{Thalhammer2021}.\vspace{3pt}
\paragraph{HomebrewedDB (HB) \cite{Kaskman2019}}
The second sequence of HB is a benchmark to check whether a method generalises well to a novel domain \cite{Kaskman2019}. It contains three objects from LM, captured with a different camera in a new environment. Our network trained for LM is used without retraining, and the result surpasses the next best method DAKDN \cite{Zhang2021} by 36\% (Table \ref{tab:homebrewed}). By focusing on the object mask, and because 2D-3D correspondences are predicted instead of a full 6D pose, our method shows high invariance to new environments and different capture devices. To verify the latter, another version of the same sequence captured with a Kinect (also part of HB) is evaluated. The camera parameters differ more, but the result is almost as good.
\begin{table}[t!]
\small
\setlength\tabcolsep{2 pt}
\begin{center}
\begin{tabular}{l|c|c|cccccccc|r }
\toprule
{Method} & Data & single-st. & Ape & Can & Cat & Drill & Duck & Eggb. & Glue & Hol.& \textbf{Avg.} \\
\midrule
DPOD\cite{Zakharov2020} & syn. & - & 2.3 & 4.0 & 1.2 & 10.5 & 7.2 & 4.4 & 12.9 & 7.5 & 6.3 \\
CDPN\cite{Zhigang2019} & syn. & - & 20.0 & 15.1 & 16.4 & 5.0 & 22.2 & 36.1 & 27.9 & 24.0 & 20.8\\
DSC-PoseNet\cite{Yang2021} & pbr+RGB & - & 9.1 & 21.1 & \textbf{26.0} & 33.5 & 12.2 & 39.4 & 37.0 & 20.4 & 24.8 \\
PyraPose\cite{Thalhammer2021}& pbr & \checkmark & 18.5 & 46.4 & 11.7 & 48.2 & 19.4 & 16.7 & 30.7 & 33.0 & 28.1 \\
Self6D\cite{Wang2020} & pbr+RGBD & - & 13.7 & 43.2 & 18.7 & 32.5 & 14.4 & \textbf{57.8} & 52.3 & 22.0 & 32.1 \\
DAKDN\cite{Zhang2021} & pbr+RGB & - & - & - & - & - & - & - & - & - & 33.7\\
SD-Pose\cite{Li2021} & pbr & - & 21.5 & 56.7 & 17.0 & 44.4 & \textbf{27.6} & 42.8 & 45.2 & 21.6 & 34.6 \\
\midrule
\textbf{CASAPose} & pbr & \checkmark & \textbf{24.3} & \textbf{59.5} & 15.2 & \textbf{57.5} & 26.0 & 14.7 & \textbf{55.4} & \textbf{34.3} & \textbf{35.9} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tab:multi} Comparison of ADD/S-Recall with SoTA approaches on LM-O with synthetic only or weakly supervised training using unlabeled real data.}
\end{table}
\begin{table}[t!]
\small
\setlength\tabcolsep{2 pt}
\begin{center}
\begin{tabular}{lccccccccccccc|r }
\toprule
{Method} & Ape & Bv. & Cam & Can & Cat & Drill & Duck & Eggb. & Glue & Hol. & Iron & Lamp.& Ph.& \textbf{Avg.} \\
\midrule
AAE \cite{Sundermeyer2019} & 4.2 & 22.9 & 32.9 & 37.0 & 18.7 & 24.8 & 5.9 & 81.0 & 46.2 & 18.2 & 35.1 & 61.2 & 36.3 & 32.6\\
MHP\cite{Manhardt2019} & 11.9 & 66.2 & 22.4 & 59.8 & 26.9 & 44.6 & 8.3 & 55.7 & 54.6 & 15.5 & 60.8 & - & 34.4 & 38.8\\
Self6D-LB\cite{Wang2020} & 37.2 & 66.9 & 17.9 & 50.4 & 33.7 & 47.4 & 18.3 & 64.8 & 59.9 & 5.2 & 68.0 & 35.3 & 36.5 & 40.1\\
DPOD\cite{Zakharov2020} & 37.2 & 66.8 & 24.2 & 52.6 & 32.4 & 66.6 & 26.1 & 73.4 & 75.0 & 24.5 & 85.0 & 57.3 & 29.1 & 50.0\\
PyraPose\cite{Thalhammer2021}& 22.8 & 78.6 & 56.2 & 81.9 & 56.2 & 70.2 & 40.4 & 84.4 & 82.4 & \textbf{42.6} & \textbf{86.4} & 62.0 & 59.5 & 63.4\\
SD-Pose\cite{Li2021} & \textbf{54.0} & 76.4 & 50.2 & 81.2 & \textbf{71.0} & 64.2 & \textbf{54.0} & \textbf{93.9} & \textbf{92.6} & 24.0 & 77.0 & 82.6 & 53.7 & 67.3 \\
\midrule
\textbf{CASAPose} & 30.3 & \textbf{94.8} & \textbf{60.0} & \textbf{83.9} & 60.5 & \textbf{89.2} & 37.6 & 71.0 & 80.7 & 30.7 & 84.5 & \textbf{89.9} & \textbf{71.7} & \textbf{68.1} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tab:single} Comparison of ADD/S-Recall with SoTA approaches on LM with synthetic training.}
\end{table}
\begin{table}[t!]
\small
\setlength\tabcolsep{2 pt}
\begin{center}
\begin{tabular}{lccccccc|cc }
\toprule
{Method} & DPOD\cite{Zakharov2020}& PyraP.\cite{Thalhammer2021} & DSC-P.\cite{Yang2021} & Self6D\cite{Wang2020} & DAKDN\cite{Zhang2021} & ~ & \textbf{Ours}~ & ~& Ours\textsuperscript{\textdaggerdbl} \\
\midrule
Avg. & 32.7 & 41.3\textsuperscript{\textdagger} & 44.0 & 59.7\textsuperscript{\textdagger} & 63.8 & ~ &\textbf{86.9}~ & ~& 84.8 \\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tab:homebrewed} Comparison of ADD/S-Recall on HB. Methods indicated with (\textsuperscript{\textdagger}) retrain with data from the target domain. We list results for the Primesense and Kinect (\textsuperscript{\textdaggerdbl}) sequence.}
\end{table}
\subsection{Ablation Study}
\label{sec:exp_abl}
Table \ref{tab:ablation} shows the effects of different components of our architecture. We train models for 13 objects and evaluate on LM, LM-O and the unseen \textit{pbr} scene \cite{Hodan2021} to see improvements with and without domain gap. The simplest model (\textit{Base}) uses the merged vector field and the second decoder, but otherwise resembles PVNet \cite{Peng2019}. The model is gradually expanded to include CLADE (\textit{C}) and the guided decoder (\textit{C/GCU}).
For each case, we train a network with and without confidence output, i.e.~with Differentiable Keypoint Regression (\textit{DKR}) or RANSAC-based voting (\textit{RV}). Averaged over the three datasets, adding \textit{C} and \textit{C/GCU} improves \textbf{ADD/S} by 13.6\% and 17.9\% compared to \textit{Base} for the \textit{RV} networks. This demonstrates that the extended network capacity with \textit{C} and the enforced local processing with \textit{GCU} improve the quality of the estimated vector fields. Adding \textit{DKR} further improves \textbf{ADD/S} by 16.1\% for \textit{C} and 17.6\% for \textit{C/GCU}, compared to the respective network with \textit{RV}. The total improvement from \textit{Base} with \textit{RV} to the final network is 38.7\% for \textbf{ADD/S} and 7.14\% for \textbf{2DP}. Adding \textit{DKR} to \textit{Base} also improves \textbf{ADD/S} by 24.5\% and \textbf{2DP} by 4.3\%, showing that its ability to assign low confidences, e.g.~in overlapping regions, already improves the simplest architecture.
We notice that adding \textit{GCU} to a network using \textit{DKR} is especially effective for the datasets with a domain gap, and even more so when occlusion is present (9.3\% and 7.3\% improvement over \textit{C}+\textit{DKR}, compared to a 2.7\% improvement for \textit{pbr}). We attribute this to the contour being a cross-domain feature; access to the silhouette and the consequent higher weighting of nearby vectors help to bridge the domain gap. Fig.~\ref{fix:example} shows this, as well as the sharp separation of the vector fields at object boundaries due to \textit{C/GCU}.
\begin{table}
\setlength\tabcolsep{2.6 pt}
\small
\begin{center}
\begin{tabular}{l c c c c c c c c c c c c}
\toprule
{Arch.} & \multicolumn{6}{c}{\textit{without keypoint regression (RV)}} & \multicolumn{6}{c}{\textit{with keypoint regression (DKR)}}
\\\cmidrule(lr){2-7}\cmidrule(lr){8-13}
{} & \multicolumn{2}{c}{LM-O}& \multicolumn{2}{c}{LM} & \multicolumn{2}{c}{pbr} & \multicolumn{2}{c}{LM-O}& \multicolumn{2}{c}{LM} & \multicolumn{2}{c}{pbr} \\
{} & 2DP & ADD/S & 2DP & ADD/S & 2DP & ADD/S & 2DP & ADD/S & 2DP & ADD/S& 2DP & ADD/S\\
\midrule
Base & 45.3 & 22.2 & 90.0 & 49.8 & 74.6 & 42.2 & 49.4 & 28.9 & 91.7 & 59.2 & 77.9 & 52.5 \\
C & 47.3 & 26.6 & \textbf{91.8} & 55.7 & 76.5 & 47.5 & 51.4 & 29.9 & 93.6 & 64.7 & 79.3 & 56.1\\
\textbf{C/GCU} & \textbf{49.2} & \textbf{26.7} & 91.4 & \textbf{59.0} & \textbf{77.3} & \textbf{49.0} & \textbf{51.5} & \textbf{32.7} & \textbf{93.8} & \textbf{68.1} & \textbf{79.6} & \textbf{57.6} \\
\bottomrule
\end{tabular}
\vspace{3mm}
\caption{\label{tab:ablation} Ablation study: Comparison of different network architectures on different datasets.}
\end{center}
\vspace{-5mm}
\end{table}
\subsection{Influence of the Object Number}
\label{sec:exp_obj}
The multi-object capacity of CASAPose with respect to the \textbf{2DP} and \textbf{ADD/S} metric is evaluated in Table \ref{tab:capacity}. Poses are estimated for the 8 LM-O objects on
LM and LM-O. The averaged \textbf{ADD/S} for the 8-object network is 8\% better than for the 13-object network, with nearly the same \textbf{2DP}. Splitting the objects into two groups of four (ape-drill, duck-holepuncher) leads to a further increase of both metrics. The two-network solution increases \textbf{ADD/S} to 38.7\%, further extending the distance to the methods in Table \ref{tab:multi}. This increase can be partially explained by the exclusion of symmetrical objects from the first group, for which we also observed a stronger enhancement. Future experiments might evaluate the use of a more recent segmentation backbone to further narrow the observed multi-object gap.
\subsection{Running Time}
\label{sec:exp_perf}
The runtime from image input to final poses for all visible objects in LM-O is 37.3 ms on average with the 8-object model. It splits into 18.8 ms for network inference, 1.7 ms for \textit{DKR}, about 2.9 ms for PnP, and 13.9 ms for finding the largest connected component (CC) for each class. This results in about 27 frames per second on our test GPU (Nvidia A100). We propose to replace the CC analysis with an instance centre prediction (e.g.~like \cite{Cheng2020}) in future work to add instance awareness to our method and to replace the cost-intensive processing.
\begin{table}[t]
\setlength\tabcolsep{1.5 pt}
\small
\parbox{.41\linewidth}{
\centering
\begin{tabular}{l c c c c }
\toprule
{} & \multicolumn{2}{c}{LM-O} & \multicolumn{2}{c}{LM}\\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}
{} & P2D & ADD/S & P2D & ADD/S \\\midrule
13 Obj. & 51.5 & 32.7 & 96.0 & 60.4 \\
8 Obj. & 54.7 & 35.9 & 96.9 & 59.2 \\
4 Obj. (2x) & \textbf{58.4} & \textbf{38.7} & \textbf{97.3} & \textbf{66.5}\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{\label{tab:capacity} Influence of object number per network on ADD/S-Recall. }
}
\hfill
\parbox{.55\linewidth}{
\centering
\includegraphics[width=\linewidth]{mask_vectors_conf}
\vspace{-5mm}
\captionof{figure}{\label{fix:example} Estimated masks, colour-coded vector fields, and confidence maps with \textit{C/GCU} (from left to right). }
}
\end{table}
\section{Conclusion}
We showed that class-adaptiveness and semantic awareness improve the performance of a multi-object 6D pose estimator. Local feature processing minimises interference between overlapping regions in a reduced output space. Object-specific parameters selected via CLADE in a second decoder strengthen the prediction accuracy. The locality of the operations allows region-wise predictions, e.g.~of least squares weights, where we demonstrated a reduction of the domain gap with our Differentiable Keypoint Regression. This also enables the direct addition of further steps, such as pose refinement, in an end-to-end solution in future work. The presented layers are general enough to also be integrated into other pose estimation architectures.