\section{INTRODUCTION} \label{sec:intro}
In recent years, deep neural networks (DNNs) have increasingly been used for advanced driver assistance systems (ADAS) that are deployed in public.
Further, DNNs play a key role in fully autonomous driving (AD), enabling the most accurate perception, planning, etc. \cite{ad_dnn}.
At the same time, DNNs are quite brittle when the input data does not exactly match the data distribution seen during training.
Small domain shifts or the existence of out-of-domain (OOD) data lead to a significant decrease in performance.
Such shifts or small corruptions of the data occur naturally and are critical for systems deployed in partly unknown environments like ADAS.
Hence, research interest in OOD data has increased in recent years (\cite{ood_images, ood_objects}).
The decrease in performance on OOD data is further amplified by adversarial attacks \cite{discovery}.
These attacks allow an adversary to fool any machine learning system by generating a small perturbation that is applied to the input data of the system.
Initially, such attacks applied the perturbation directly to the input image \cite{noise_physical}.
More critical for AD/ADAS are attacks that demonstrate the possibility of performing similar manipulations in the physical world.
Here, patches or markings are placed in a physical scene (\cite{patch, physical_detector, darts}) and fool the DNN even after the perturbation is captured by the camera system.
It has also been shown that such physical attacks are possible even if the adversary has only strict black-box access to the system \cite{physical_black_box}, meaning that any deployed system could theoretically be attacked.
Hence, all described data distribution types are relevant for the safe deployment of AD/ADAS and need to be dealt with.
However, a standard DNN outputs high confidence for its predictions on these data distributions, meaning a subsequent planning or control algorithm treats the DNN output as a reliable value.
For a correct follow-up decision, the output confidence must actually reflect the true reliability of the DNN, also under data distribution shifts.
Hence, the estimated confidence should align with the true accuracy and data knowledge of the model.
This allows the subsequent algorithms to make an informed decision and not rely on a bad prediction by the DNN.
Based on the confidence estimate, the subsequent algorithms can, for example, invoke alternative backup systems to override the DNN prediction or engage safety features, such as a speed reduction or passing control to the (safety) driver.
In particular, the approach of using different expert models for certain situations is popular \cite{ad_fallback} and is also used by current ADAS operating on public roads.
However, due to strict timing constraints and limited computational resources, it is not possible to run the more complex expert models simultaneously.
Therefore, the general DNN must output a meaningful confidence so the controller can decide whether (and which) more complex models are used on the next images of a video stream to generate a specialized understanding of the current scene.
In reality the general DNN might be a combination of different systems from various suppliers, where each is focused on the perception of a certain task, e.g. different systems exist for driving space detection and pedestrian detection.
To still use the aforementioned concept of situational expert systems depending on the confidence, the confidence estimates of all supplied systems must be equally meaningful.
One way to ensure this is a unique and reliable confidence estimation that is independent of the individual suppliers.
This allows the final manufacturer to assess each system individually.
Therefore, a model-agnostic confidence estimation method is required, since it is not possible to enforce a certain confidence training method on each supplier: each has its own pipelines and architectures that cannot easily be adapted to the requirements of every customer.
Additionally, we focus on black-box confidence estimation, since systems are typically shared by suppliers in a secret fashion where the customer has no access to or knowledge of the individual architecture, components, or gradient flows.
Instead, only the final output of the supplied system is made available and can be used by the customer for further computations and decisions.
Hence, a model-agnostic black-box confidence estimation is required to enable the safe usage of systems from different suppliers.
An overview of the different confidence estimation categories and our focus area is shown in \autoref{fig:overview}.
To explore black-box confidence estimation for ADAS we choose a traffic sign recognition (TSR) system as an exemplary ADAS, which is deployed in public by many manufacturers.
At the same time, TSR systems are most comparable to work on the confidence estimation of DNNs in the domain of image classification, on which most publications focus.
We choose this use case since it allows comparing current advances from general image classification to our proposed black-box confidence estimation, while still being relevant for ADAS development.
This enables us to observe the difference between our black-box method and recent white-box methods.
\textbf{Our contributions:}
\begin{itemize}
\item We motivate the need for strict black-box confidence estimation that is real-time capable for AD/ADAS
\item The neighborhood confidence (NHC) is proposed as a method to perform black-box confidence estimation using limited additional samples
\item A comparison with the most similar online white-box confidence method is performed for small distribution shifts, full OOD data and adversarial attacks
\item Our findings show that we achieve improved or similar performance in low data regimes while only using the black-box output, which is required for the motivated usage in AD/ADAS
\end{itemize}
\section{RELATED WORK}
Most publications regarding the confidence estimation of DNNs use methods that change the underlying architecture or training process.
This includes the training of multiple models to use ensembles during online inference \cite{ensemble} or using different (probabilistic) layers (\cite{evidential, bayesian}) to output a more meaningful probability distribution than the standard softmax layer.
These methods are unsuited for our considered setting because they have to be applied in the training process and cannot be used to determine the reliability of a supplied system independent of the concrete supplier and system.
Alternatively, methods exist that do not require the retraining of a model and are added post hoc for inference.
Here, one method that also relies on ensembling during inference is Monte-Carlo Dropout \cite{mc_dropout} which does not require a change to the DNN architecture if dropout \cite{dropout} is already used during training.
In this approach dropout stays active during inference and allows a Bayesian approximation of the confidence.
Another method that does not require retraining is temperature scaling (\cite{calibration, calibration_bayesian}).
This improves the calibration of a trained DNN by adding a scaling coefficient to the final logit layer.
Furthermore, the combination of a DNN with the k-nearest neighbors algorithm \cite{deep_knn} is proposed to estimate the distance of a test data point to the closest train data points.
Again, the discussed methods are unsuited for the considered setting because they require an adjustment to the supplied system which a customer typically cannot perform.
In our considered setting model agnostic methods are needed that are only applied during inference and do not require any change in the architecture or addition of further components \cite{confidence_online}.
Here, the most similar and recently proposed method is called attribution-based confidence (ABC) \cite{attribution} which can be applied to any differentiable model and does not require any changes.
It uses the pixel-wise attribution to generate perturbed data points.
To determine the attribution, gradient-based methods are exploited, meaning the ABC requires white-box access to a model.
Hence, using this method in practice requires that supplied systems are shared such that the internal computations are observable.
This contrasts with the setting we focus on, which considers a secretly shared system and thus requires estimating the confidence of a black-box model.
Additionally, there exists work specific to the separate data distribution types that we consider in this work.
Methods are proposed to specifically detect whether a data point is OOD \cite{ood_detection} or perturbed by an adversary \cite{adv_detection}.
However, such methods are not the focus of our work since we are interested in a single system that can provide a meaningful confidence estimation under different data distributions.
This is most useful for the scenario motivated in \autoref{sec:intro} because it allows following control algorithms to perform an appropriate action under various different conditions.
Using multiple systems to capture each data distribution type is not possible due to strict requirements on the available computational resources and timing constraints.
\begin{figure}[t]
\centering
\includegraphics[scale=0.23]{images/overview.png}
\caption{High-level categories to group confidence estimation methods}
\label{fig:overview}
\end{figure}
\section{NEIGHBORHOOD CONFIDENCE} \label{sec:nhc}
We first describe the motivation behind our proposed confidence metric.
Then, we formulate an algorithm to compute the basic version of the neighborhood confidence.
Additionally, we introduce other concepts that improve the performance of the NHC further.
\subsection{Motivation}
The basic idea behind the neighborhood confidence is that the classification reliability of a system is higher when a data point lies in the center of a decision region.
At this location the data point is classified as reliably as possible in the associated class, because a small perturbation of the data point does not change the result of the classification.
Therefore, the confidence of the system should be highest at such data points to reflect the highest reliability of the classification.
Following this line of thought, data points near the decision boundary are classified less reliably and should have a lower confidence.
Here, a small perturbation is sufficient to push a data point into a different decision region, which leads to a change in the classification without a meaningful change in the data itself.
Hence, the confidence of the system should be lower to show that a different class is almost predicted and the classification is not very reliable.
The described basic concept behind the neighborhood confidence is visualized in \autoref{fig:motivation} for a two dimensional example with three different classes.
Darker colors show a higher value of the classification reliability and consequently an ideal confidence metric should show a similar behavior.
\subsection{Method}
Following the motivation for an ideal confidence metric, a computable black-box metric is needed that reveals how close a decision boundary is to a given data point.
Due to the high dimensionality of DNN-based systems and of the input sensors used in practice, an ideal metric can only be roughly approximated when considering the strict requirements of AD/ADAS regarding inference time and available computational resources.
To perform this approximation, we use a method that tests how robust a classification is under the influence of noise.
First, multiple noise perturbations are added to the input data and then the classification is performed by the system on all data points.
If the system classifies all perturbed data points as the same class as the unperturbed data point, it shows that the input data point is not near a decision boundary.
Otherwise, the influence of the noise would have pushed some perturbed data points into a different decision region and thus a different class.
Combining the presented ideas, the final neighborhood confidence to assess the classification reliability of a black-box system is the fraction of perturbed data points that are classified as the same class as the original data point.
Hence, for a generic classification system $f(\cdot)$ the neighborhood confidence $\xi$ for a perturbation strength $\lambda$ can be calculated as:
\begin{itemize}
\item Obtain raw data point $x \in \mathbb{R}^D$
\item Draw $N$ noise samples $n_0, \dots , n_{N-1} \in \mathbb{R}^D$ from a random distribution
\item Generate $N$ perturbed data points $x'_0, \dots , x'_{N-1}$ with $x'_i = x + \lambda n_i$
\item Classify all data points $y = [f(x), f(x'_0), \dots, f(x'_{N-1})]$
\item Calculate the NHC as $\xi = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\{y_0 = y_i\}$
\end{itemize}
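As an illustration, the listed steps can be sketched in Python; the function name, the use of NumPy, and the choice of Rademacher noise are our assumptions for this sketch, not a prescribed implementation:

```python
import numpy as np

def neighborhood_confidence(f, x, n_samples=7, strength=0.2, seed=None):
    """Basic neighborhood confidence (NHC) for a black-box classifier.

    f         -- black-box classifier returning a top-1 class label
    x         -- raw data point of shape (D,)
    n_samples -- number of noise samples N
    strength  -- perturbation strength lambda
    """
    rng = np.random.default_rng(seed)
    y0 = f(x)  # reference class on the unperturbed data point
    # Draw N noise samples; every entry is -1 or +1 (Rademacher)
    noise = rng.choice([-1.0, 1.0], size=(n_samples,) + np.shape(x))
    # Fraction of perturbed data points classified as the reference class
    agree = sum(f(x + strength * n) == y0 for n in noise)
    return agree / n_samples
```

For a toy classifier, a data point far from any decision boundary yields $\xi = 1$, while a point close to a boundary yields a lower value.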
Using this method, the NHC takes values between zero and one, where smaller values indicate that a decision boundary is closer.
This effect is visualized in \autoref{fig:nhc} for a simplified two dimensional example.
It can be observed that the NHC captures whether a data point is located near the boundary of a decision region or further inside that region.
The strength $\lambda$ can be used to adjust the range of the neighborhood sampling that checks whether a decision boundary is nearby.
If more classes exist and the decision regions are smaller, with decision boundaries closer together, $\lambda$ can be decreased to still have a meaningful sampling procedure.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{images/motivation.png}
\caption{Simplified visualization of the classification reliability of data points in a decision region for a two dimensional example}
\label{fig:motivation}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.325]{images/nhc.png}
\caption{Simplified visualization of the resulting neighborhood confidence $\xi$ with $N=7$ for a two dimensional example}
\label{fig:nhc}
\end{figure}
The described algorithm allows for an efficient calculation of the NHC, since the classification of all data points can be done in parallel.
This is beneficial for applications in environments where strict timing constraints exist, as is the case for AD/ADAS.
If enough computational resources exist the calculation of the NHC can be done without any relevant overhead, by batching all data points $x, x'_0, \dots, x'_{N-1}$ together and calculating $y$ with a single forward pass.
To exploit this efficiency, $N$ must be rather small so that the complete batch fits on the computing device and enough memory is available.
Therefore, we study the impact of the number of used noise samples in \autoref{sec:hyper} in low data regimes.
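The described batching can be sketched as follows, assuming a hypothetical classifier \texttt{f\_batch} that maps a batch of shape $(B, D)$ to $B$ top-1 labels in one forward pass:

```python
import numpy as np

def nhc_batched(f_batch, x, n_samples=7, strength=0.2, seed=None):
    """NHC computed with a single batched forward pass (sketch)."""
    rng = np.random.default_rng(seed)
    noise = rng.choice([-1.0, 1.0], size=(n_samples,) + np.shape(x))
    # Stack x, x'_0, ..., x'_{N-1} into one batch of size N + 1
    batch = np.concatenate([x[None, :], x[None, :] + strength * noise], axis=0)
    y = np.asarray(f_batch(batch))  # one forward pass for all data points
    # Fraction of perturbed points that agree with the unperturbed prediction
    return float(np.mean(y[1:] == y[0]))
```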
Furthermore, it is important to note that the presented NHC is model-agnostic and can be calculated without any adaptation for black-box systems.
This also holds for hard black-box systems \cite{physical_black_box} where only the top-1 class is output.
No information about the classification system $f(\cdot)$ or its internal gradients is required.
This allows the application to unknown systems from external suppliers which enables the use case presented in \autoref{sec:intro}.
\subsection{Enhancements}
It is possible to enhance the presented basic version of the NHC in different ways depending on the concrete use case.
On the one hand, different perturbation strengths $\lambda_1, \dots, \lambda_j$ can be used at the same time instead of only one.
This can be useful if the structure of the decision region is unknown or very uneven.
It allows gathering more insight into the structure of the surrounding decision boundaries.
Another option is to use the introduced method but specify a concrete reference class for $y_0$ instead of using the class $f(x)$ that is predicted by the system on the unperturbed data point.
This allows estimating the distance to the decision boundary of the specified class, which is useful if a potential misclassification as a certain class is especially severe.
For instance, in the case of AD/ADAS one wants to ensure that no pedestrian detection is missed.
Hence, the pedestrian class can be chosen as the reference class $y_0$, which allows approximating the distance to this class at any time in addition to calculating the normal NHC using $f(x)$ as the reference class.
If the resulting value of $\xi$ is high when the pedestrian class is used as $y_0$, the decision region of the pedestrian class is close and extra care can be taken in subsequent control algorithms.
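This variant can be sketched with a fixed reference class index (the helper name and the integer class encoding are our assumptions):

```python
import numpy as np

def nhc_with_reference(f, x, ref_class, n_samples=7, strength=0.2, seed=None):
    """NHC variant with a fixed reference class y_0 (sketch).

    A high value indicates that the decision region of `ref_class`
    (e.g. a safety-critical class such as pedestrians) is close to x.
    """
    rng = np.random.default_rng(seed)
    noise = rng.choice([-1.0, 1.0], size=(n_samples,) + np.shape(x))
    # Fraction of perturbed points that fall into the reference class
    agree = sum(f(x + strength * n) == ref_class for n in noise)
    return agree / n_samples
```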
\section{EXPERIMENTS}
To evaluate the performance of the neighborhood confidence we perform qualitative experiments on different data distribution types.
The performance is compared with the most similar online white-box method ABC \cite{attribution} which mainly follows our considered setting by not requiring any adjustments or additions in the TSR system.
It is interesting to explore whether the usage of the more detailed information in the white-box method achieves a better performance than the proposed black-box method.
Since we are interested in real-time-capable confidence estimation, we only use the gradient from a single backward pass to estimate the attribution required for the ABC, because this results in the least overhead in inference time.
Using more computationally expensive methods like integrated gradients \cite{integrated_gradients}, as explored in \cite{attribution}, would lead to a significant delay in the time required for inference.
This goes against our goals and hence we restrict ourselves to a single backward pass.
\subsection{Setup}
For the DNN based TSR system we choose a standard ResNet-18 architecture \cite{resnet} that is trained on the German traffic sign recognition benchmark (GTSRB) dataset \cite{gtsrb}.
Training is performed without additional augmentation and the system achieves a standard accuracy of $\approx \SI{99.3}{\percent}$ on the GTSRB final test set.
To evaluate the effect of confidence estimation under a small distribution shift in \autoref{sec:shift} we generate synthetic images of the same traffic sign classes that are used in the GTSRB dataset.
To this end, we take real images of traffic signs and apply various transformations to simulate different environmental conditions.
In this way, we generate \num{500} new samples for each of the \num{43} classes.
The performance on OOD data is evaluated in \autoref{sec:ood} by using the Chinese traffic sign recognition database (TSRD) \cite{tsrd}.
Since we are interested in exploring the performance on full OOD data, we drop all images from the TSRD that have a corresponding class in the GTSRB dataset.
Thereby, a full OOD dataset of traffic signs is generated that has no class overlap with the training dataset.
Finally, to generate adversarial attacks in \autoref{sec:adv} we use the projected gradient descent (PGD) method \cite{pgdm}.
This is a strong and standard choice to evaluate the impact of adversarial attacks and is also used by others (\cite{evidential, attribution}).
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.495\linewidth}
\centering
\input{images/hyper_acc_thres.tex}
\caption{Accuracy under consideration of confidence thresholds on the synthetic dataset}
\label{fig:hyper_thres}
\end{subfigure}
\begin{subfigure}[t]{0.495\linewidth}
\centering
\input{images/hyper_ood.tex}
\caption{Empirical CDF on the OOD dataset}
\label{fig:hyper_ood}
\end{subfigure}
\caption{Comparison of the NHC for different numbers of noise samples $N$ and strength $\lambda = \num{0.2}$}
\label{fig:hyper}
\end{figure*}
\subsection{Hyperparameter Study} \label{sec:hyper}
The limiting factor of the NHC when used for real-time capable AD/ADAS is the number of used noise samples $N$.
Hence, we visualize the impact of different choices for $N$ in \autoref{fig:hyper} for the strength $\lambda = \num{0.2}$.
However, the behavior for other strengths is very similar.
It shows that $N$ mainly represents an option to adjust the granularity of the quantization depending on the use case.
This holds for \autoref{fig:hyper_thres}, which shows the standard accuracy of the TSR system when the classification of a data point is disregarded in the accuracy calculation if the NHC confidence of this classification is below a threshold.
In the case that the confidence threshold equals zero the displayed value is the standard accuracy over all data points, since every classification has a confidence $\xi \geq 0$.
Thus, no classification is disregarded for the accuracy calculation.
As soon as a confidence threshold greater than zero is used, the classifications of data points where the NHC is below this threshold are disregarded and the accuracy is calculated without those classifications.
For example, in \autoref{fig:hyper_thres} one can observe for $N = \num{10}$ that the accuracy is $\approx \SI{95}{\percent}$ when only classifications with a confidence $\xi \geq \num{0.4}$ are taken into account.
In this experiment one expects that the accuracy increases when the classifications with lower confidence are increasingly disregarded.
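The accuracy values read off the figure follow this scheme; a minimal sketch of the computation (the array names are ours, not part of the paper):

```python
import numpy as np

def accuracy_at_threshold(confidences, correct, threshold):
    """Accuracy over the classifications whose confidence meets the threshold.

    At threshold zero this equals the standard accuracy over all data
    points, since every classification satisfies xi >= 0.
    """
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=bool)
    keep = confidences >= threshold  # classifications that are not disregarded
    if not keep.any():
        return float("nan")  # nothing left above the threshold
    return float(correct[keep].mean())
```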
At the same time, the quantization effect also appears in \autoref{fig:hyper_ood}.
Here, cumulative distribution functions (CDFs) are used to visualize the distribution of the confidence when the classification is performed on data points of unknown classes.
For example, one can observe for $N = \num{2}$ that $\approx \SI{60}{\percent}$ of all classifications have a confidence $\xi \leq \num{0.6}$ or for $N = \num{5}$ and $N = \num{7}$ that $\approx \SI{40}{\percent}$ of all classifications have a confidence $\xi = \num{0}$.
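The empirical CDF values read off this way amount to a one-line computation (a sketch with hypothetical array names):

```python
import numpy as np

def ecdf_at(confidences, t):
    """Fraction of classifications with confidence xi <= t (empirical CDF)."""
    return float(np.mean(np.asarray(confidences) <= t))
```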
Because we evaluate on full OOD data, it is impossible for the system to output the correct class, and thus a low confidence for each classification is ideal.
From the presented results we conclude that using $N = \num{7}$ achieves a good tradeoff between the quality of the confidence estimation and reduced computational requirements.
Hence, in the following we always use $N = \num{7}$ noise samples for calculating the NHC.
To have a fair comparison we also use the same number of samples for the ABC, since we are interested in a comparison under restricted computational requirements.
Additionally, we analyze the impact of the noise distribution from which the NHC samples are drawn.
We find that sampling from a Rademacher distribution consistently provides the best results.
A possible explanation is that this distribution allows the neighborhood sampling to search in every dimension with the maximum available strength $\lambda$ for decision boundaries.
Therefore, for the following results the noise samples are always drawn from a Rademacher distribution.
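As a minimal sketch, Rademacher noise can be drawn as follows (the helper name is ours); scaled by $\lambda$, every entry probes its dimension at the full available strength:

```python
import numpy as np

def rademacher_noise(shape, seed=None):
    """Draw i.i.d. noise from a Rademacher distribution.

    Every entry is -1 or +1 with equal probability, so a scaled sample
    lambda * n searches each input dimension at distance exactly lambda.
    """
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)
```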
\subsection{In-Domain Distribution Shift} \label{sec:shift}
The first comparison of our proposed neighborhood confidence with the attribution-based confidence is shown in \autoref{fig:shift}.
Similar to \autoref{fig:hyper_thres}, the standard accuracy is shown when classifications of data points are disregarded for the accuracy calculation if the confidence of a classification is below a threshold.
We use our generated synthetic dataset to examine the performance under a small distribution shift; similar shifts occur naturally when deploying systems for AD/ADAS in only partly known environments.
Also, for the NHC different strengths $\lambda$ are evaluated.
\begin{figure}[t]
\centering
\input{images/results_acc_thres.tex}
\caption{Comparison of the NHC with different strengths $\lambda$ and the ABC based on the accuracy under consideration of confidence thresholds on the synthetic dataset}
\label{fig:shift}
\end{figure}
All shown confidence metrics pass the basic sanity check since the accuracy increases when the confidence threshold increases.
However, all variants of the NHC reach a higher accuracy for higher confidence thresholds.
Also, they show a higher initial increase than the ABC as soon as the confidence threshold is greater than zero.
This shows that a significant fraction of the data points that are classified with $\xi = 0$ are actually misclassified.
Once these data points are disregarded, the accuracy increases notably and keeps climbing monotonically under increased confidence thresholds.
For higher confidence thresholds the NHC also achieves a higher accuracy, meaning fewer data points with perfect confidence are misclassified than for the ABC.
All in all, using the NHC with $\lambda = \num{0.4}$ leads to the best performance.
\subsection{Out-of-Domain Data} \label{sec:ood}
In \autoref{fig:ood} the second comparison is done by visualizing the distribution of the confidence when classifying data points of unknown classes, similar to \autoref{fig:hyper_ood}.
The OOD dataset is used which consists only of Chinese traffic signs that the TSR system trained on the GTSRB dataset cannot correctly classify.
Hence, the ideal behavior is a low confidence for every classification; in the best case the confidence is always zero.
\begin{figure}[t]
\centering
\input{images/results_ood.tex}
\caption{Comparison of the NHC with different strengths $\lambda$ and the ABC based on empirical CDFs on the OOD dataset}
\label{fig:ood}
\end{figure}
A corresponding behavior can be observed for both confidence metrics.
For the NHC, the strength impacts the overall confidence level.
Higher strengths lead to a decreased confidence on most data points, which is intuitive.
The previously best version with $\lambda = \num{0.4}$ performs on par with the ABC and is only slightly outperformed by the variant using $\lambda = \num{0.5}$.
\subsection{Adversarial Attacks} \label{sec:adv}
Lastly, we compare the impact of an adversary on the ABC and NHC in \autoref{fig:adv}.
The synthetic dataset is used again, and the confidence is evaluated while increasingly severe PGD attacks \cite{pgdm} are performed on the TSR system.
Here, $\epsilon$ denotes the severity of the adversary in terms of $\ell_\infty$ norm and $\epsilon = 0$ means no adversary is present.
Therefore, this is equivalent to the setting of standard classification.
\begin{figure}[b]
\centering
\input{images/results_adv.tex}
\caption{Comparison of the NHC with different strengths $\lambda$ and the ABC under the influence of a PGD adversary}
\label{fig:adv}
\end{figure}
The first observation is that all confidence metrics successfully decrease the confidence as soon as the adversary is introduced.
However, for the lowest strength $\lambda = \num{0.3}$ the confidence begins to significantly increase again once the severity of the adversary is further increased.
In this case, a more severe adversary can reduce the impact of the NHC because the increased severity of the adversarial perturbation pushes the data points further into the decision region of the target class of the adversary.
Using higher strengths for the neighborhood sampling prevents this effect, since the check for decision boundaries is performed at a greater distance.
Another interesting observation is the value of the confidence for $\epsilon = 0$.
Here, no adversary exists, meaning the resulting values are the mean confidence on the unperturbed synthetic dataset.
Intuitively, the mean confidence decreases when $\lambda$ is increased for the NHC.
However, the value is also rather low for the ABC.
This means that some variants assign a low confidence to most of the classifications.
Such a generally low confidence level harms the ability to correctly distinguish between benign and harmful data points when the difference to the confidence level under the influence of an adversary is too small.
In \autoref{sec:dis} we further elaborate on this behavior and its origin.
Similar to \autoref{fig:shift}, the NHC with $\lambda = \num{0.4}$ achieves the best results, as it optimizes the tradeoff between the standard mean confidence and a meaningful confidence decrease under the influence of the adversary.
This version can detect if a significant change in the distribution of the data points exists and reflects this change in the confidence.
It performs best (or close to best) on all experiments showing that a single optimal strength value can be selected which allows the efficient usage of the NHC in real applications.
\section{DISCUSSION} \label{sec:dis}
Our experiments show that, for the considered low data regimes, the additional information available to a strong gradient-based white-box method cannot be exploited and provides no benefit over the neighborhood confidence.
Instead, the simple neighborhood sampling from a Rademacher distribution provides better confidence estimates in most considered cases.
This is promising for the application in AD/ADAS since less complex methods are needed to comply with the strict timing requirements.
Our results in \autoref{sec:adv} show that the general confidence level is rather low for some evaluated variants also on unperturbed data.
The use of the synthetic dataset represents a small in-domain distribution shift that causes the data points to spread more over a decision region and lie closer to a decision boundary.
In some cases the data points also lie in a different decision region since the standard accuracy drops from $\approx \SI{99.3}{\percent}$ on the original GTSRB test dataset to $\approx \SI{92.9}{\percent}$ on our synthetic dataset (see \autoref{fig:hyper_thres} or \autoref{fig:shift}).
The reliability of the classification is reduced which is reflected in all evaluated confidence metrics.
However, for some variants the confidence level on benign data points under this distribution shift is rather low, and one might want to increase the confidence gap to actually perturbed, harmful data points, which are the important ones to distinguish.
To accomplish this, integrating concepts for calibration \cite{calibration} into online confidence metrics seems promising.
It is interesting to explore the calibration of online metrics for confidence estimation depending on the current data distribution observed in past data points during inference.
\addtolength{\textheight}{-2cm}
Finally, we would like to point out that it is in principle possible to combine the NHC with training methods for an improved confidence estimation.
One could explore whether the use of augmentation methods during training, like \mbox{AugMix} \cite{augmix}, has an impact on the confidence estimation.
In our preliminary experiments strong augmentation during training led to larger and more robust decision regions.
This mainly improves the behavior of all evaluated confidence metrics on unperturbed data by increasing the average confidence, while keeping the strong performance on other distribution types.
Similarly, the interaction of the NHC with adversarial training \cite{pgdm} merits a detailed investigation because adversarial training leads to increased and homogeneous decision regions around the training data samples.
\section{CONCLUSION}
We introduce the neighborhood confidence for online black-box confidence estimation of DNNs motivated by searching the neighborhood of a data point for different decision boundaries.
No internal information of the DNN is required and only the top-1 class output is used, which is the minimal possible output of a DNN.
This allows the NHC to be used to assess the classification reliability of externally supplied components.
The performance of the NHC is evaluated for different data distribution types deviating from the training data distribution allowing only strictly limited additional samples for inference, as required for AD/ADAS.
In this low data regime, the NHC performs better than or comparably to the most comparable method from the literature, even though this attribution-based confidence requires white-box access to the DNN.
\bibliographystyle{IEEEtran}
\section{Introduction}
\vspace{-0.2in}
\begin{figure}[htb]
\centering
\includegraphics[width=1\linewidth]{figs/steering-overview.pdf}
\vspace{-0.4in}
\caption{The measurement-induced steering protocol conceptually consists of (a) passively steering a system (qubit or qutrit) to an arbitrary state via coupling to an ancilla qubit that is exposed to an environment for measurement and simple state reset (i.e., to $\ket{0}$).
A specifically chosen unitary operator $U(J)$, parameterized by an arbitrary coupling strength $J$, acts upon the system-ancilla pair. By repeatedly applying the unitary and measuring the ancilla, a back-action is induced on the system whereby the average over all readout outcomes steers the system to a desired state.
Instead of averaging the measurement readouts, (b) active steering processes the readouts on a classical computer to accelerate the convergence of the system state.
We experimentally realize the protocol on IBM's superconducting quantum computers, such as \textit{ibm\_perth} with the device connectivity graph shown in (c).
To select our system qubit (qutrit), we choose the transmon with the highest anharmonicity. The ancilla qubit is then selected as nearest neighbor given by the device connectivity.
The Bloch sphere (d) shows the results of passively steering a system qubit on \textit{ibm\_perth} to prepare an equal superposition state (shown as yellow dots) where the initial states are arbitrary (shown as black dots).}
\label{fig:overview}
\end{figure}
One of the primary requirements in quantum computing is the ability to prepare an arbitrary quantum state \cite{kakInitializationProblemQuantum1999,divincenzoPhysicalImplementationQuantum2000a}. This requirement is fulfilled by: (1) initializing the quantum computer to a known fiducial state ($\ket{0}^{\otimes n}$) of $n$-qubits, and (2) applying a series of discrete quantum gates to the known state to obtain a desired final state ($\ket{\psi_\oplus} = \mathcal{U}\ket{0}^{\otimes n}$) \cite{kitaevQuantumComputationsAlgorithms1997}.
Initialization of the quantum computer is commonly achieved by waiting for the system to thermalize to the ground state (\textit{passive reset}) -- with the waiting time roughly correlated to $T_1$ coherence times~\cite{rigettiSuperconductingQubitWaveguide2012a, hartyHighFidelityPreparationGates2014a}.
Although the waiting time for qubits to thermalize is feasible for today's quantum computers, as technology improves and the coherence times of large collections of qubits increase, the waiting time will come to dominate the program duration. Simply letting qubits equilibrate with their environment is not an option. Moreover, passive reset is not applicable in scenarios where we need to initialize to an arbitrary (non-fiducial) state.
To avoid passively waiting for a qubit reset, recent efforts investigate \textit{active reset} such as through projective measurements~\cite{basilewitschFundamentalBoundsQubit2021a, tornowMinimumQuantumRunTime2022a}.
In reality, a desired state may not be an eigenstate of a measurement operator, and measurement thus leads to probabilistic outcomes.
Therefore, when the state of the qubits is collapsed via measurement, single-qubit rotations are applied to correct the state based on the readout outcomes \cite{tornowMinimumQuantumRunTime2022a}.
However, such an approach faces two main challenges: first, measurement itself can be a long and error-prone operation depending on the underlying technology~\cite{johnsonHeraldedStatePreparation2012, risteInitializationMeasurementSuperconducting2012}; and second, the correction to the post-measurement state introduces significant overhead as measurement outcomes need to be classically processed for each qubit.
Moreover, arbitrary state preparation requires carefully calibrating the necessary quantum gates, as well as extreme fine-tuning on large quantum computers to guarantee an appropriate fidelity.
To address the above drawbacks, alternative quantum state initialization methods are needed.
One strategy is to algorithmically transfer entropy from some qubits to others, or outside the system to an environment -- resulting in a cooling effect \cite{parkHeatBathAlgorithmic2015}.
In the reversible case, unitary quantum gates are applied to cool some qubits while heating up others \cite{boykinAlgorithmicCoolingScalable2002}.
In the irreversible case, heat is transferred to the environment via quantum operations (i.e., with measurement).
These are referred to as reversible algorithmic cooling \cite{fernandezAlgorithmicCoolingSpins2004} and heat-bath algorithmic cooling \cite{brassardProspectsLimitationsAlgorithmic2014a, rodriguez-brionesHeatbathAlgorithmicCooling2017}, respectively.
Both methods utilize the properties of entangled states to cool qubits to simple pure quantum states.
However, for real world open quantum systems undergoing non-Markovian dynamics \cite{breuerColloquiumNonMarkovianDynamics2016, whiteDemonstrationNonMarkovianProcess2020a}, a successful state reset implies not only purification, but also erasure of initial correlations between qubits and the environment \cite{reedFastResetSuppressing2010, geerlingsDemonstratingDrivenReset2013, basilewitschBeatingLimitsInitial2017}.
Recent theoretical research investigates new protocols that can successfully perform state reset by analyzing open systems \cite{basilewitschFundamentalBoundsQubit2021a} and exploiting the back-action caused by measuring entangled states.
Previous experimental research has shown the feasibility of these methods through the measured back-action caused by measuring entangled superconducting transmon qubits \cite{groenPartialMeasurementBackactionNonclassical2013, hatridgeQuantumBackActionIndividual2013}.
We take inspiration from passive reset, active reset, as well as quantum computer controllability to investigate a new initial state preparation approach.
Specifically, we use Schr\"odinger's original formulation of quantum steering \cite{schroedingerErfassungQuantengesetzeDurch1929} where a sophisticated experimenter performs suitable measurements on one of the two parts of a bipartite system to drive the other part to a desired state.
The approach reduces the number of qubits that undergo active resets, lowers the classical processing involved during quantum computation for correction, and can prepare arbitrary states $\ket{\psi_\oplus}$ without having to first prepare a known initial state.
Our approach involves delegating ancilla and system qubits (qutrits) that undergo $N$ repetitions of these straightforward steps:
(1) perform a fixed, entangling quantum circuit $U$ on the ancilla qubits and system qubits;
(2) measure the ancilla qubits and disregard the measurement results;
(3) perform an active reset on ancilla qubits.
After repeating these steps, the state of the system exponentially approaches a desired state. Specifically, this paper makes the following major contributions.
\begin{itemize}
\item While recent work provides the theoretical foundation for \textit{measurement-induced steering of quantum systems}~\cite{royMeasurementinducedSteeringQuantum2020a}, our approach experimentally realizes measurement-induced steering for arbitrary state preparation on physical quantum computers.
\item We develop quantum circuits to implement the measurement-induced quantum steering (MIQS) protocol. We primarily focus on a qubit-qubit coupled system (an ancilla qubit to steer a qubit) and a qubit-qutrit coupled system (an ancilla qubit to steer a qutrit).
\item We also investigate an \textit{active} approach, where instead of disregarding the measurement results, we take advantage of the measurement readouts to accelerate the convergence.
\item We show that the quantum steering operator can be divided into local and non-local operations using Cartan decomposition~\cite{dalessandroDecompositionsUnitaryEvolutions2006,dalessandroIntroductionQuantumControl2021}. This decomposition can be viewed as a graphical representation for a qubit-qubit coupled system, providing visualization for non-local operations. The non-local operations convey the strength of the entanglement necessary to perform quantum steering.
\end{itemize}
Figure~\ref{fig:overview} conceptually summarizes the quantum steering protocol and shows an overview of mapping the protocol onto a cloud-accessible quantum computer.
\vspace{-0.2in}
\section{Digital Implementation of MIQS}
\vspace{-0.1in}
The goal of the measurement-induced quantum steering (MIQS) protocol is to prepare a desired target state $\ket{\psi_\oplus}$, irrespective of the initial state.
This is achieved by exploiting the back-action caused by measuring part of an entangled system, steering our system to the target state.
In this section, we first provide the formal specification of the MIQS protocol.
Next, we describe the implementation of the MIQS protocol, focusing on steering a qubit and a qutrit, providing quantum circuits that satisfy the steering conditions.
Finally, we explore the properties of the generated circuits.
\vspace{-0.2in}
\subsection{Formulation of MIQS Protocol}
\vspace{-0.1in}
Suppose we have a system of ancilla qubits initialized to the state $\ket{\psi_A}$ (density matrix $\rho_A$) and system qubits in an arbitrary state $\rho_S$.
The general MIQS protocol involves the following steps:
\begin{enumerate}
\item Couple the ancilla qubits and system qubits with a composite unitary operator $U$. The state of the ancilla-system after the $n$-th application of the unitary evolution is $\rho_{A-S}^{n+1} = U \rho_A \otimes \rho_S^{n} U^\dagger$.
\item {The ancilla qubits are then decoupled from the system, giving the density matrix of the system as:
\vspace{-0.05in}
\begin{equation}\label{eq:meas-steer}
\rho_S^{n+1} = \mathrm{Tr}_A \left[\rho_{A-S}^{n+1} \right] = \mathrm{Tr}_A \left[ U \rho_A \otimes \rho_S^{n} U^\dagger\right]
\vspace{-0.05in}
\end{equation}
}
\item The ancilla qubits are reinitialized to their initial states and the steps are repeated.
\end{enumerate}
The goal is to steer the system state to a desired target state $\ket{\psi_{\oplus}}$ ($\rho_{\oplus}$). The dynamics of $U$ should be chosen such that the following steering inequality is satisfied:
\vspace{-0.05in}
\begin{equation}\label{eq:steer-ineq}
\bra{\psi_{\oplus}} \rho_S^{n+1} \ket{\psi_{\oplus}} \ge \bra{\psi_{\oplus}} \rho_S^n \ket{\psi_{\oplus}}.
\vspace{-0.05in}
\end{equation}
In other words, with each repetition of the steps in the MIQS protocol, the state of our system should get closer to our desired pure target state $\ket{\psi_\oplus}$.
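These steps are straightforward to simulate. The sketch below is an illustrative instance of the protocol, not one of the steering operators derived in the following sections: the ancilla is reset each round to the target state itself and the coupling is a partial SWAP, a choice for which each iteration of Equation~\ref{eq:meas-steer} provably increases the target fidelity, so Equation~\ref{eq:steer-ineq} can be checked numerically:

```python
import numpy as np
from scipy.linalg import expm

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def steer_step(U, rho_A, rho_S):
    """One protocol iteration: couple, evolve, and trace out the ancilla."""
    rho = U @ np.kron(rho_A, rho_S) @ U.conj().T
    rho4 = rho.reshape(2, 2, 2, 2)       # indices: (ancilla, system, ancilla', system')
    return np.einsum('asat->st', rho4)   # partial trace over the ancilla

# illustrative target state; the ancilla is reset to it every round
psi = np.array([np.cos(0.3), np.exp(0.5j) * np.sin(0.3)])
rho_A = np.outer(psi, psi.conj())
U = expm(-1j * 0.5 * SWAP)               # partial SWAP with coupling J = 0.5

rho_S = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)  # arbitrary initial state
fids = [np.real(psi.conj() @ rho_S @ psi)]
for _ in range(50):
    rho_S = steer_step(U, rho_A, rho_S)
    fids.append(np.real(psi.conj() @ rho_S @ psi))
```

For this coupling the fidelity obeys $F_{n+1} = \cos^2(J)\,F_n + \sin^2(J)$, so it is monotone and converges exponentially to one.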
The general theory under which Equation~\ref{eq:steer-ineq} will be satisfied is derived in \cite{royMeasurementinducedSteeringQuantum2020a}.
In brief, if the quantum dynamics is given as the time evolution $U = \exp(-i H \delta t)$ of a Hamiltonian $H$, then $H$ satisfies Equation~\ref{eq:steer-ineq} if it has the following form
\vspace{-0.05in}
\begin{equation}\label{eq:steer-H}
H = \sum_n \left( O_A^{(n)}\ket{\psi_A}\bra{\psi_A}\right) \otimes \Omega_S^{(n)} + \mathrm{h.c.,}
\vspace{-0.05in}
\end{equation}
where $n$ labels the ancilla qubits.
The Hamiltonian consists of direct products of operators $O_A^{(n)}$ that rotate the ancillas from their initial state to an orthogonal subspace and operators $\Omega_S^{(n)}$ that rotate the system to an orthogonal subspace. Algorithm~\ref{alg:circuit} summarizes the steps to find an operator $U$ that satisfies the steering protocol. Lines 1-4 compute an orthogonal subspace of our target state, $\ket{\psi_\oplus}^\perp$. Lines 5-10 produce the operators $O_A^{(n)}$ and $\Omega_S^{(n)}$, which are used in constructing the Hamiltonian. Line 11 solves the time evolution of the Hamiltonian with some coupling parameter $J$.
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\begin{algorithm}[t]
\DontPrintSemicolon
\KwOutput{Unitary operator $U$}
\KwInput{Desired target state $\ket{\psi_S}$}
\KwInput{Ancilla state: $\ket{\psi_A}$}
\SetKwProg{Fn}{Find $\ket{\psi_S}^\perp$}{:}{\KwRet}
\Fn{}{
Prepare projection operator: $\mathrm{P} = \mathbb{I} - \ket{\psi_S}\bra{\psi_S}$ \\
Define the space: $S = \mathbb{I} - \mathrm{P}$ \\
Solve for the nullspace: $\ket{\psi_S}^\perp = \mathrm{null}(S)$\\
}
\SetKwProg{Fn}{Prepare $\mathcal{U}$}{:}{\KwRet}
\Fn{}{
Find operators that connect to orthogonal spaces\\
\For{$k=1$ \KwTo $\mathrm{dim}(\ket{\psi_S}^\perp)$}
{
$O_A^k = \ket{\psi_A}^\perp \bra{\psi_A}$\\
$\Omega_S^k = \ket{\psi_S} \bra{\psi_S}^{\perp}_k$\\
}
$H = \sum_k O_A^k \ket{\psi_A}\bra{\psi_A} \otimes \Omega_S^k + \mathrm{h.c.}$\\
Solve for $U = \exp(-iJH\delta t)$
}
\textbf{Done}
\caption{MIQS Operator}
\label{alg:circuit}
\end{algorithm}
Rather than physically engineering and realizing a system with the satisfactory Hamiltonian (Equation \ref{eq:steer-H}), we instead \textit{simulate} the Hamiltonian on a quantum computer through the application of discrete unitary operators \cite{lowHamiltonianSimulationQubitization2019a, kokcuFixedDepthHamiltonian2022}.
In the circuit description of quantum computing, a series of discrete unitary operators (gates) transforms the state of a quantum register (a collection of qubits).
Typically, the quantum gates operate on one or two qubits -- but with a universal gate set and an appropriate circuit, any arbitrary unitary operator can be defined \cite{nielsen2002quantum}. In the next two sections, we investigate the quantum circuits $\mathcal{U}$ (Line 11 in Algorithm~\ref{alg:circuit}) that steer qubits as well as qutrits.
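Algorithm~\ref{alg:circuit} itself can be prototyped in a few lines of numpy. The following is our own illustrative sketch (assuming a single-qubit ancilla reset to $\ket{0}$, not the calibrated circuits executed later): it builds $H$ from the orthogonal complements of the target and ancilla states, exponentiates it, and checks that repeated steering rounds increase the target fidelity:

```python
import numpy as np
from scipy.linalg import expm, null_space

def miqs_operator(psi_S, psi_A=np.array([1.0, 0.0]), J=1.0):
    """Numpy sketch of Algorithm 1 for a single-qubit ancilla."""
    dS = len(psi_S)
    # Lines 1-4: orthogonal complement of the target state
    S = np.outer(psi_S, psi_S.conj())            # projector onto |psi_S>
    perp_S = null_space(S)                       # columns span {|psi_S>}^perp
    perp_A = null_space(np.outer(psi_A, psi_A.conj()))[:, 0]
    # Lines 5-10: operators rotating each subsystem to its orthogonal space
    O_A = np.outer(perp_A, psi_A.conj())         # ancilla |psi_A> -> orthogonal state
    H = np.zeros((2 * dS, 2 * dS), dtype=complex)
    for k in range(perp_S.shape[1]):
        Omega = np.outer(psi_S, perp_S[:, k].conj())  # system orthogonal -> target
        term = np.kron(O_A, Omega)
        H += term + term.conj().T                # + h.c.
    # Line 11: time evolution with coupling J
    return expm(-1j * J * H)

def steer_step(U, rho_A, rho_S):
    """One steering round (Eq. 1): couple, evolve, trace out the ancilla."""
    d = rho_S.shape[0]
    rho = U @ np.kron(rho_A, rho_S) @ U.conj().T
    return np.einsum('asat->st', rho.reshape(2, d, 2, d))
```

The `null_space` call plays the role of Lines 1-4: the null space of the target projector is exactly the orthogonal subspace of the target state.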
\begin{figure}[htp]
\subfloat[Starting from an unknown initial state $\rho_S$, the system qubit $S$ is steered via a repeated application of an ancilla-system entanglement operation $U_{A-S}$, followed by measurements and active resets of the ancilla qubit $A$. After $N$ applications, the system qubit arrives at a target state $\ket{\psi_{S\oplus}}$. \label{fig:sqrt-swap-circuit}]{%
\includegraphics[width=1\linewidth]{figs/two-steering-sqrt-swap.pdf}%
}\hfill
\subfloat[A simplified quantum circuit representation of the ancilla-system entangling operator $U_{A-S}$ that specifically drives the overall state to $\ket{0}\otimes\ket{+} = \ket{0} \otimes \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$.\label{fig:two-qubit-decomposition}]{%
\includegraphics[width=1\linewidth]{figs/two-qubit-decompisition.pdf}
}\hfill
\vspace{-0.1in}
\caption{An overview of the quantum steering protocol.}
\vspace{-0.1in}
\end{figure}
\vspace{-0.2in}
\subsection{Implementation of Qubit-Qubit MIQS Protocol}\label{sec:qubit-qubit-miqs}
\vspace{-0.1in}
In this section, we derive the unitary operator $U$ that steers a qubit to a desired state.
An arbitrary target state of a qubit (excluding global phase) has the form
\vspace{-0.05in}
\begin{equation}
\ket{\psi_\oplus} = \cos(\theta/2) \ket{0} + e^{i\phi}\sin(\theta/2)\ket{1},
\vspace{-0.05in}
\end{equation}
with $0 \leq \theta \leq \pi$ and $0 \leq \phi < 2\pi$.
A Hamiltonian that satisfies Equation~\ref{eq:steer-ineq} is
\vspace{-0.05in}
\begin{multline}\label{eq:qubit-H}
H_{A-S} = \frac{J}{2}\left( -\cos(\phi)\cos(\theta)\sigma_A^x\sigma_S^x - \cos(\phi)\sigma_A^y\sigma_S^y
\right.\\
\left. + \sin(\phi)\sigma_A^y\sigma_S^x
+\sin(\theta)\sigma_A^x\sigma_S^z
-\sin(\phi)\cos(\theta)\sigma_A^x\sigma_S^y \right)
\end{multline}
where $J$ is an arbitrary coupling constant, and $\sigma_u^{\{x,y,z\}}$ are the standard Pauli matrices acting on the individual subsystem $u$.
Assuming the standard computational basis, the matrix corresponds to
\vspace{-0.05in}
\begin{equation}
H = \frac{J}{2}\begin{bmatrix}
0 & 0 & \alpha & -\beta_-^*\\
0 & 0 & -\beta_+ & -\alpha \\
\alpha & -\beta_+^* & 0 & 0\\
-\beta_- & -\alpha & 0 & 0\\
\end{bmatrix}
\vspace{-0.05in}
\end{equation}
with $\alpha = \sin\theta$ and $\beta_\pm = e^{i\phi}(\cos\theta \pm 1)$.
A quantum circuit that reproduces the unitary operator
\vspace{-0.05in}
\begin{equation}
U = \exp(-iH)
\vspace{-0.05in}
\end{equation}
will essentially swap the ancilla-qubit space with the system-qubit space.
In Section~\ref{sec:geometry} we provide the optimal quantum circuits that implement the operator with single-qubit rotations and CNOT gates.
However, for the remainder of this section we provide an illustrative example with a simple circuit construction.
\noindent \textbf{Example: } A systematic method to construct the quantum circuit is to consider each Pauli string in the Hamiltonian~$H$.
As an example, consider the case when $\phi=0$, then Equation~\ref{eq:qubit-H} simplifies to
\begin{equation}\label{eq:H_steer}
\hat{H}_{A-S}=\frac{J}{2} (-\cos(\theta) \underbrace{\sigma_A^x\sigma_S^x}_{H_{XX}} + \sin(\theta)\underbrace{\sigma_A^x\sigma_S^z}_{H_{XZ}} - \underbrace{\sigma_A^y\sigma_S^y}_{H_{YY}} ).
\end{equation}
Therefore, the unitary evolution operator is given as
\begin{equation}
U_{A-S} = \exp(-i\hat{H}_{A-S}) = U_{XX+XZ} \circ U_{YY} \label{eq:U};
\end{equation}
with two commuting terms
\begin{align}
U_{XX+XZ} & = \exp(i\alpha H_{XX} -i \beta H_{XZ}), \label{eq:U_XX_XZ}\\
U_{YY} &= \exp(i\frac{J}{2}H_{YY}), \label{eq:U_YY}
\end{align}
where $\alpha = \frac{J\cos(\theta)}{2}$ and $\beta = \frac{J\sin(\theta)}{2}$. The circuit decomposition is done in two main steps. First, the non-commuting terms in Equation~\ref{eq:U_XX_XZ} are decomposed using an approximation. Next, all the Pauli Hamiltonians, $H_{XX}$, $H_{XZ}$, and $H_{YY}$, are decomposed to their circuit representations.
A nice simplification occurs when either $\sin(\theta) =0$ or $\cos(\theta)=0$, leaving either $U_{XX}$ or $U_{XZ}$ terms in combination with $U_{YY}$. This specifically occurs when the target state $\ket{\psi_\oplus} = \ket{+} = \frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right)$. With $\theta = \pi/2$, the Hamiltonian in Equation~\ref{eq:H_steer} simplifies to
\begin{equation}
\hat{H} = \frac{J}{2}\left( \sigma_A^x\sigma_S^z - \sigma_A^y\sigma_S^y\right).
\end{equation}
Since the Pauli operators $H_{XZ}$ and $H_{YY}$ commute, we can express the evolution operator as
\begin{equation}\label{eq:U_plus}
U_{A-S} = \exp(-i\frac{J}{2}\sigma_A^x\sigma_S^z)\circ \exp(i\frac{J}{2}\sigma_A^y\sigma_S^y)
\end{equation}
and obtain the quantum circuit as shown in Figure~\ref{fig:two-qubit-decomposition}. The $\ket{+}$ state is particularly interesting due to its prevalence in quantum algorithms, primarily in preparing entangled Bell states by applying a subsequent CNOT operation.
Appendix~\ref{sec:eg-single-qubit} provides an analytical analysis of steering to the $\ket{+}$ state.
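The convergence for this example is also easy to check numerically. The sketch below builds $U_{A-S}$ directly from Equation~\ref{eq:U_plus} and iterates the protocol with the ancilla reset to $\ket{0}$; one finds that the population of the orthogonal state $\ket{-}$ contracts by exactly $\cos^2 J$ per round, so the system converges to $\ket{+}$:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

J = 1.0
# Equation (18): U = exp(-i J/2 X_A Z_S) . exp(+i J/2 Y_A Y_S)
U = expm(-0.5j * J * np.kron(X, Z)) @ expm(0.5j * J * np.kron(Y, Y))

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
rho_A = np.diag([1.0, 0.0]).astype(complex)      # ancilla reset to |0>

rho_S = np.diag([0.0, 1.0]).astype(complex)      # arbitrary start: |1>
pops = [np.real(minus.conj() @ rho_S @ minus)]
for _ in range(25):
    rho = U @ np.kron(rho_A, rho_S) @ U.conj().T
    rho_S = np.einsum('asat->st', rho.reshape(2, 2, 2, 2))  # trace out ancilla
    pops.append(np.real(minus.conj() @ rho_S @ minus))
```

(The "Equation (18)" reference in the comment stands for Equation~\ref{eq:U_plus}; the geometric decay rate $\cos^2 J$ is what the sketch verifies.)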
\vspace{-0.2in}
\subsection{Implementation of Qubit-Qutrit MIQS Protocol}
\vspace{-0.1in}
In the previous section, we show a derivation of the quantum circuit to steer a qubit to a desired state.
In this section, we derive a quantum circuit to prepare an arbitrary qutrit state.
Control of qutrits via conventional means is typically harder than for qubits; therefore, there is additional benefit to using the MIQS protocol.
An arbitrary qutrit state (excluding global phase) can be written in terms of four parameters as
\begin{align}
\ket{\psi_\oplus} &= \sin(\xi/2)\cos(\theta/2)\ket{0} \nonumber \\
&+ e^{i\phi_{01}}\sin(\xi/2)\sin(\theta/2)\ket{1} \nonumber \\
&+ e^{i\phi_{02}}\cos(\xi/2)\ket{2},
\end{align}
where $0 \leq \theta,\xi \leq \pi$ quantify the magnitude of the components of $\ket{\psi_\oplus}$ while $0 \leq \phi_{01},\phi_{02} < 2\pi$ describe the phases of $\ket{1}$ and $\ket{2}$ relative to $\ket{0}$, respectively.
A Hamiltonian that steers the qutrit will have the following form
\vspace{-0.1in}
\begin{equation}
H = \sigma^+ \otimes \ket{\psi_\oplus}\bra{\psi_\oplus}_1^\perp + \sigma^+ \otimes \ket{\psi_\oplus}\bra{\psi_\oplus}_2^\perp + \mathrm{h.c.}
\vspace{-0.1in}
\end{equation}
where $\sigma^+$ is the raising operator and $\ket{\psi_\oplus}_i^\perp$ are orthogonal states to our desired state. We note that we may rewrite the Hamiltonian in terms of $\sigma_x$ and $\sigma_y$ Pauli-matrices and $\lambda_j$ Gell-Mann matrices, with some coupling $\alpha_{i,j}$ between them. Similar to the previous section, we may take the strings consisting of Pauli and Gell-Mann terms and map them to simple building blocks for our quantum circuits.
For our experimental realization of a qutrit state, we will focus on one particular state: an equal superposition as defined by
\vspace{-0.2in}
\begin{equation}
\ket{\psi_\oplus} = \frac{1}{\sqrt{3}}\left(\ket{0} + \ket{1} + \ket{2} \right).
\vspace{-0.05in}
\end{equation}
We may express the orthogonal subspace as being spanned by two vectors
\vspace{-0.05in}
\begin{align}
\ket{\psi_\oplus}^\perp_1 &= \frac{1}{\sqrt{3}}\left(\ket{0} + \nu\ket{1} + \nu^*\ket{2} \right),\\
\ket{\psi_\oplus}^\perp_2 &= \frac{1}{\sqrt{3}}\left(\ket{0} + \nu^*\ket{1} + \nu\ket{2} \right)
\vspace{-0.05in}
\end{align}
where $\nu = \exp(i 2\pi/3)$. Thus, a Hamiltonian that will steer the overall qutrit state to the desired target $\ket{\psi_\oplus}$ has the following matrix form
\begin{equation}\label{eq:qutrit-hamiltonian}
H_{A-S} = \frac{1}{3}\left(\begin{array}{@{}c|c@{}}
\mbox{\normalfont\Large\bfseries 0}_{3 \times 3} &
\begin{matrix}
2 & 2 & 2 \\
-1 & -1 & -1 \\
-1 & -1 & -1
\end{matrix}
\\
\hline
\begin{matrix}
2 & -1 & -1 \\
2 & -1 & -1 \\
2 & -1 & -1
\end{matrix} &
\mbox{\normalfont\Large\bfseries 0}_{3\times 3}
\end{array}\right)
\end{equation}
again showing that the overall operation moves both subsystems to their orthogonal subspaces.
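The matrix in Equation~\ref{eq:qutrit-hamiltonian} can be reproduced directly from the outer-product form. A short numpy check, assuming the convention $\sigma^+ = \ket{1}\bra{0}$ for the ancilla raising operator:

```python
import numpy as np

nu = np.exp(2j * np.pi / 3)
psi = np.ones(3) / np.sqrt(3)                    # target: equal superposition
perp1 = np.array([1, nu, nu.conj()]) / np.sqrt(3)
perp2 = np.array([1, nu.conj(), nu]) / np.sqrt(3)

sigma_plus = np.array([[0, 0], [1, 0]], dtype=complex)   # |1><0| (assumed convention)
H = np.kron(sigma_plus,
            np.outer(psi, perp1.conj()) + np.outer(psi, perp2.conj()))
H = H + H.conj().T                               # + h.c.
```

The three qutrit states form an orthonormal basis, and the assembled $H$ matches the block matrix of Equation~\ref{eq:qutrit-hamiltonian} entry by entry.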
\vspace{-0.2in}
\subsection{Geometrical Considerations}\label{sec:geometry}
\vspace{-0.1in}
We have derived the quantum circuits that steer qubit and qutrit states to their respective desired states.
The quantum circuits specifically entangle the ancilla and systems states such that they satisfy target state convergence given by Equation~\ref{eq:steer-ineq}.
This section presents the quantum circuits from a geometrical point of view, offering insight into the \textit{kinds} of entanglement necessary.
\newtheorem{definition}{Definition}[section]
The machinery for providing our insight is based on the Cartan decomposition of the $\mathfrak{su}(d_1 d_2)$ Lie algebra, where $d_1 = 2$ and $d_2 = 2,3$ for the qubit or qutrit case, respectively \cite{dalessandroDecompositionsUnitaryEvolutions2006, dalessandroIntroductionQuantumControl2021}.
\begin{definition}\label{def:cartan}
A \textbf{Cartan decomposition} of a Lie algebra $\mathfrak{g}$ is defined as an orthogonal split $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{m}$ satisfying
\vspace{-0.1in}
\begin{equation}
[\mathfrak{k}, \mathfrak{k}] \subset \mathfrak{k}, \quad [\mathfrak{m}, \mathfrak{m}] \subset \mathfrak{k}, \quad [\mathfrak{k}, \mathfrak{m}] = \mathfrak{m}.
\vspace{-0.05in}
\end{equation}
A \textbf{Cartan subalgebra} denoted by $\mathfrak{a}$ refers to a maximal Abelian algebra within $\mathfrak{m}.$
\end{definition}
\begin{figure}[htp]
\centering
\vspace{-0.5in}
\includegraphics[width=1\linewidth]{figs/weyl.pdf}%
\vspace{-0.4in}
\caption{The Weyl Chamber representing coordinates of non-local two-qubit unitaries.
All possible two-qubit steering operators $U$ are represented by the blue line.
The coordinates are given by the coupling parameter $J$, namely $[J, J, 0]$.
Maximum entanglement is achieved when $J=\pi/2$, corresponding to the point $A_2$ in the chamber.
Individual points correspond to the maximum fidelity achieved when executing the steering protocol with a steering operator given by a choice of $J$. \label{fig:weyl}}
\end{figure}
Picking basis elements one by one and finding a Cartan decomposition directly through Definition~\ref{def:cartan} is difficult in practice.
Instead, the Lie algebra is partitioned into $\mathfrak{k}$ and $\mathfrak{m}$ by an involution: a Lie algebra homomorphism $\theta:\mathfrak{g}\to\mathfrak{g}$ that preserves all commutators and satisfies $\theta(\theta(g)) = g$ for any $g\in\mathfrak{g}$.
The involution is then used to split the Lie algebra by defining subspaces via $\theta(\mathfrak{k}) = \mathfrak{k}$ and $\theta(\mathfrak{m}) = -\mathfrak{m}$.
Cartan's classification revealed that there are only three types of decomposition for $\mathfrak{su}(n)$. Of these, we utilize the decomposition given by the involution $\theta(g) = -g^\mathrm{T}$ for all $g\in\mathfrak{g}$ (referred to in the literature as an \textbf{AI} type decomposition).
The result of the Cartan decomposition is the ability to write any unitary operator $U$ as
\vspace{-0.05in}
\begin{equation}
U = K_1 A K_2
\vspace{-0.05in}
\end{equation}
where $K_1$ and $K_2$ are elements of $e^{i\mathfrak{k}}$ and $A \in e^{i\mathfrak{a}}$ is generated by the Cartan subalgebra.
It is well known that an arbitrary operator acting on two qubits, $U \in U(4)$, can be decomposed as the product of a gate in $SU(4)$ and a global phase shift $e^{i\theta}$.
Since the global phase does not impact the underlying quantum mechanics, we focus specifically on $SU(4)$.
We are particularly interested in the operations that are \textit{non-local}, giving insight into the necessary entanglement.
Such operations are then given as elements in $SU(4)\backslash SU(2)\otimes SU(2)$.
By the Cartan decomposition of $\mathfrak{su}(4)$, any two-qubit operation can be written as
\vspace{-0.05in}
\begin{equation}
U = k_1 A k_2
\vspace{-0.05in}
\end{equation}
where $k_1,k_2 \in SU(2) \otimes SU(2)$ and the non-local part is $A = \exp\left(\frac{i}{2} (c_1 \sigma_x\sigma_x + c_2 \sigma_y\sigma_y + c_3 \sigma_z\sigma_z)\right)$.
This representation allows separating the steering operator into local ($k_1$, $k_2$) and non-local ($A$) parts.
The coefficients $c_k \in [0, \pi]$ are the non-local coordinates, and contain a geometrical structure \cite{zhangGeometricTheoryNonlocal2003}.
The coefficients for any possible ancilla-qubit steering operator $U(J)$ are given by
\begin{equation}
c = [J, J, 0]
\end{equation}
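This claim can be checked numerically through the Makhlin local invariants, which coincide exactly when two two-qubit gates are locally equivalent. The sketch below (our own check, not the paper's code) builds the steering operator from the product form of the steering Hamiltonian, $H = J(\ket{1}\bra{0}\otimes\ket{\psi_\oplus}\bra{\psi_\oplus^\perp} + \mathrm{h.c.})$, and compares its invariants with those of the canonical non-local gate at coordinates $[J, J, 0]$:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# magic (Bell) basis used to compute the Makhlin local invariants
M = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]]) / np.sqrt(2)

def makhlin_invariants(U):
    """Local invariants (G1, G2) of a two-qubit gate."""
    Ub = M.conj().T @ U @ M
    m = Ub.T @ Ub
    d = np.linalg.det(U)
    return (np.trace(m) ** 2 / (16 * d),
            (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * d))

def steering_U(J, theta, phi):
    """Steering operator from the product form of the steering Hamiltonian."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    perp = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
    O_A = np.array([[0, 0], [1, 0]], dtype=complex)      # |1><0| on the ancilla
    term = np.kron(O_A, np.outer(psi, perp.conj()))
    return expm(-1j * J * (term + term.conj().T))

def canonical_A(c1, c2, c3):
    """Non-local gate at Weyl-chamber coordinates [c1, c2, c3]."""
    G = c1 * np.kron(X, X) + c2 * np.kron(Y, Y) + c3 * np.kron(Z, Z)
    return expm(0.5j * G)
```

Equal invariants for several values of $J$ (and target angles) confirm that $U(J)$ lies on the $[J, J, 0]$ line of the Weyl chamber.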
\begin{figure}[htp]
\centering
\vspace{-0.1in}
\includegraphics[width=1\linewidth]{figs/two-qubit-kak.pdf}
\vspace{-0.1in}
\caption{The optimized decomposition of the qubit-qubit steering operator. $K_i$ gates are single-qubit rotations produced by the Cartan decomposition and are parameterized by $\theta$ and $\phi$ of a desired state. The non-local operator $A$ is decomposed using two CNOT gates and local qubit rotations along X and Z axis. The circuit is further simplified by combining possible local rotations into a single qubit rotation $U_3$ -- a native arbitrary rotation gate on IBM Quantum computers. The $X^{(J/2)}$ and $Z^{(J/2)}$ gates are defined as $e^{i\frac{\pi}{4}J} R_x(\pi J/2)$ and $e^{i\frac{\pi}{4}J} R_z(\pi J/2)$ respectively. $X^{(1/2)}$ gate is then defined as $R_x(\pi/2)$. \label{fig:two-qubit-kak}}
\end{figure}
Figure~\ref{fig:weyl} displays these parameters for any ancilla-qubit steering operator $U$ on the Weyl chamber -- which is the symmetry-reduced version of a cube.
The point $L$ corresponds to the gate CNOT and all gates that are locally equivalent, including the CPHASE gate.
As shown, CNOT and CPHASE gates are not locally equivalent to the steering operator $U$.
Thus, despite being characterized as perfect entanglers, the CNOT and CPHASE gates do not satisfy the steering conditions and in fact are unital operators on the qubit.
Therefore, the capability of the steering operator to create entanglement between the qubit and the ancilla is a necessary but not sufficient condition to steer the qubit.
Digital quantum computers, fortunately, allow for the implementation of arbitrary unitary operations that satisfy the non-local criteria. Figure~\ref{fig:two-qubit-kak} shows the optimal circuit given by the Cartan decomposition for the ancilla-qubit steering operator, which we execute on digital quantum computers.
\vspace{-0.2in}
\subsection{Rapid Reset via Measurement Readouts}
\vspace{-0.1in}
In our current description of the protocol, the results of measuring the ancilla qubits are discarded.
Effectively, by averaging over all possible readout outcomes, the state of our system converges to the desired state.
This is advantageous as, in general, no classical processing of data is required, avoiding additional overhead.
However, by utilizing the readout results of the ancilla qubits we can accelerate convergence of our system state.
Contemporary quantum computers have the infrastructure to process readout results during the execution of a quantum circuit.
Hence, we take advantage of this capability to demonstrate preparation of a desired state by utilizing readout results.
As a simple demonstration, note in Section~\ref{sec:qubit-qubit-miqs} that the steering operator swaps the ancilla and system spaces.
Therefore, if the ancilla qubit has swapped to its orthogonal state (a readout of ``1''), the system qubit has successfully swapped to the desired state. In general, the measurement of an ancilla qubit with a readout of ``1'' is given by the projection operator
\vspace{-0.05in}
\begin{equation}\label{eq:measurement-proj}
\Pi_1 = \ket{1}_{A}\bra{1}_{A} \otimes \mathbb{I}_{S}.
\vspace{-0.05in}
\end{equation}
The ancilla-system state after applying the steering operator $U$ and measuring the ancilla state in ``1'' is
\vspace{-0.05in}
\begin{equation}
\rho_{A-S}^{n+1} = \frac{\Pi_1 U \rho_{A-S}^{n} U ^\dagger \Pi_1}{p_1}
\vspace{-0.05in}
\end{equation}
where $p_1 = \mathrm{Tr}\left[U\rho_{A-S}^nU^\dagger\Pi_1\right]$ is the probability of measuring a ``1''.
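This update is straightforward to simulate numerically. The sketch below is our own illustration (not the paper's code; all names are ours): it applies an ancilla-system unitary, projects the ancilla onto $\ket{1}$, and renormalizes. With the excitation-swap unitary $U = \exp(-i\frac{J}{2}(\sigma_A^x\sigma_S^x + \sigma_A^y\sigma_S^y))$ at $J = \pi/2$, a readout of ``1'' heralds that the system has been swapped to $\ket{0}$.

```python
import numpy as np

# Sketch (ours, not the paper's code) of one active-steering step: apply the
# ancilla-system unitary U, project the ancilla (first tensor factor) onto
# |1>, and renormalize, matching the update for rho_{A-S}^{n+1} above.
def expmi(H):
    """exp(-i H) for Hermitian H, via eigendecomposition (no SciPy needed)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def active_step(rho_AS, U):
    """Return the post-measurement state and the probability p1 of reading '1'."""
    Pi1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))   # |1><1|_A (x) I_S
    evolved = U @ rho_AS @ U.conj().T
    p1 = float(np.real(np.trace(Pi1 @ evolved)))
    return Pi1 @ evolved @ Pi1 / p1, p1

# Example steering unitary U = exp(-i J/2 (X_A X_S + Y_A Y_S)) at J = pi/2,
# which fully swaps an excitation between ancilla and system.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
U_swap = expmi((np.pi / 4) * (np.kron(X, X) + np.kron(Y, Y)))
```

Starting from ancilla $\ket{0}$ and system $\ket{1}$, this step yields $p_1 = 1$ and leaves the system in $\ket{0}$, consistent with the heralding argument above.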
\vspace{-0.2in}
\section{Experiments}
\vspace{-0.1in}
In this section, we describe the steps followed to physically prepare states via the measurement-induced quantum steering (MIQS) protocol with superconducting transmon qubits and qutrits.
\vspace{-0.2in}
\subsection{Experimental Setup}
\vspace{-0.1in}
The experiments were performed using different IBM Quantum computers (accessed through IBM Cloud~\cite{IBMQuantum}): \textit{ibm\_lima}, \textit{ibm\_belem}, and \textit{ibm\_perth}.
The hardware commands are coded using Qiskit, utilizing the recent additions of mid-circuit measurements and active reset operations. Furthermore, we took advantage of Qiskit Pulse \cite{alexanderQiskitPulseProgramming2020a} -- a pulse-level programming model -- which allowed us to define, calibrate, and execute quantum circuits outside conventional definitions.
The low-level access to the underlying quantum hardware enables processing quantum information on qutrits (three-level systems), extending the concept of quantum computation beyond two-level systems.
For most operations, we used gates calibrated by the IBM team.
For each transmon, the local oscillator (LO) frequency is given by IBM's calibrated $\ket{0} \to \ket{1}$ frequency, which was kept fixed for the experiments.
Transitions between the $\ket{1}$ and $\ket{2}$ states are achieved using amplitude-modulated microwave pulses with a sinusoidal sideband at frequency $f_{12} - f_{01}$.
This results in an effective shift of frequency for the pulses from $f_{01}$ to $f_{12}$ \cite{krantzQuantumEngineerGuide2019a}.
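This frequency-shifting trick is easy to verify numerically. The toy example below is ours (the numbers are arbitrary, not IBM's calibration values): modulating a Gaussian baseband envelope with a complex sideband at $f_{\mathrm{sb}} = f_{12} - f_{01}$ moves the pulse's spectral peak by $f_{\mathrm{sb}}$.

```python
import numpy as np

# Toy illustration (ours; arbitrary numbers): single-sideband modulation of a
# baseband envelope shifts the effective drive frequency by f_sb = f12 - f01,
# i.e., from f01 to f12.
dt = 1e-9                              # 1 ns sample spacing (assumed)
t = np.arange(256) * dt
f_sb = -300e6                          # assumed anharmonicity-scale shift
envelope = np.exp(-0.5 * ((t - t.mean()) / (16 * dt)) ** 2)   # Gaussian pulse
shifted = envelope * np.exp(2j * np.pi * f_sb * t)            # single sideband

freqs = np.fft.fftfreq(t.size, d=dt)
peak = freqs[np.argmax(np.abs(np.fft.fft(shifted)))]
# `peak` now sits within one FFT bin of f_sb.
```

A purely real sinusoidal modulation would produce both $f_{01} \pm f_{\mathrm{sb}}$ sidebands; complex (IQ) modulation keeps only the desired one.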
Appendix~\ref{sec:chip} shows the results of the calibration.
Figure~\ref{fig:superconducting-transmons} depicts the energy levels of the superconducting transmon architecture.
\begin{figure}[htp]
\centering
\vspace{-0.2in}
\includegraphics[width=0.8\linewidth]{figs/setup.pdf}
\vspace{-0.2in}
\caption{Schematic of the superconducting computers that realize our qubit-qubit and qubit-qutrit coupling. \label{fig:superconducting-transmons}}
\vspace{-0.1in}
\end{figure}
The MIQS circuits are designed using a combination of: default single-qubit gates, which operate in the $\{\ket{0}, \ket{1}\}$ subspace $(01)$; the default entangling CNOT gate; and custom-calibrated single-qutrit gates, which operate on the $\{\ket{1}, \ket{2}\}$ subspace $(12)$. The single-qutrit gates are defined using the amplitude of the $\pi_{1\to2}$ pulse, which we obtained via a Rabi experiment.
We use the default implementation of the CNOT gate as defined by IBM Quantum. Extended to a qubit-qutrit system, it acts as an $SU(6)$ gate (on the $2 \times 3 = 6$-dimensional space) with the truth table shown in Table~\ref{tab:cnot}. For the control qubit in the (01) subspace, it acts as a standard qubit CNOT gate but with an additional phase of $\pi/2$ on the $\ket{2}$ state of the target qutrit \cite{galdaImplementingTernaryDecomposition2021, yurtalanImplementationWalshHadamardGate2020}.
IBM Quantum allows the reuse of qubits through mid-circuit measurements and conditional-reset.
The \emph{reset} is achieved by applying a NOT gate conditioned on the measurement outcome of the qubit.
During the execution of the MIQS protocol, the ancilla qubit is measured and subsequently reset.
\begin{table}[htp]
\centering
\vspace{-0.1in}
\begin{tabular}{c c c}
\hline
Control & Target & Output \\
\hline\hline
$\ket{0}$ & $\ket{0}$ & $\ket{00}$\\
$\ket{0}$ & $\ket{1}$ & $\ket{01}$ \\
$\ket{0}$ & $\ket{2}$ & $\ket{02}$ \\
$\ket{1}$ & $\ket{0}$ & $\ket{10}$ \\
$\ket{1}$ & $\ket{1}$ & $\ket{11}$ \\
$\ket{1}$ & $\ket{2}$ & $i\ket{12}$ \\
\end{tabular}
\vspace{-0.1in}
\caption{Truth table for the default IBM CNOT gate where the control qubit acts on a target qutrit. The operation is implemented as two consecutive CNOT gates (more details can be found in Ref. \cite{galdaImplementingTernaryDecomposition2021}).\label{tab:cnot}}
\vspace{-0.1in}
\end{table}
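The truth table above can be encoded directly as a $6\times6$ matrix (our construction, with basis ordering $\ket{c,t} \mapsto 3c + t$ for control qubit $c$ and target qutrit $t$): per the table, the composite two-CNOT operation is the identity on all basis states except for the extra phase of $i$ on $\ket{1,2}$.

```python
import numpy as np

# The truth table encoded as a 6x6 matrix (our construction); basis ordering
# |c, t> -> index 3*c + t for control qubit c in {0,1}, target qutrit t in
# {0,1,2}. Only |1,2> acquires a phase of i; all other rows are identity.
def cnot_qubit_qutrit():
    U = np.eye(6, dtype=complex)
    U[5, 5] = 1j          # |1,2> -> i |1,2>
    return U
```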
For qubit readout, we used the $0-1$ discriminator provided by IBM Quantum. However, this discriminator is unable to correctly identify excitations to the $\ket{2}$ state, misclassifying them as $\ket{1}$. Therefore, to read out the qutrits, we developed our own custom $0-1-2$ discriminator to classify in-phase and quadrature (IQ) points.
For a desired system state $\ket{\psi_\oplus}$, we construct a batch of MIQS circuits where the total number of iterations ($N$) of $U_{A,S}$ is incremented from $1$ to a maximum of $\mathcal{N}$. This enables us to estimate the state of the system as the number of $U_{A,S}$ iterations varies, and reduces the overhead due to cloud access to the hardware. For each iteration $N$, we conduct quantum state tomography on the system qubit. The measurement results from the quantum computer are processed locally. The estimated state of the system qubit is taken as an unbiased average over all ancilla qubit outcomes (i.e., a projective measurement), and an estimate of the mixed system state is computed using the maximum-likelihood, minimum-effort method \cite{smolinEfficientMethodComputing2012}. Once we are satisfied with the results, we fix $N = \mathcal{N}$, which provides one MIQS circuit that faithfully prepares the state $\ket{\psi_\oplus}$. We repeat this process for different coupling parameters $J$, noting the relationship between $J$, the number of iterations $\mathcal{N}$, and the achieved fidelity of the state $\ket{\psi_\oplus}$.
\begin{figure*}[htp]
\includegraphics[width=0.9\linewidth]{figs/lima-vs-belem-vs-perth-err-subplots.pdf}%
\vspace{-0.15in}
\caption{Steering experiment on three IBM Quantum (IBMQ) machines. \label{fig:sqrt-swap-steer-ibmq}}
\vspace{-0.2in}
\end{figure*}
Before executing the MIQS protocol, we further verify the correctness of the steering operator $U_{A,S}$ through quantum process tomography (QPT). QPT is a procedure for experimentally reconstructing a complete description of a noisy quantum channel $\mathcal{E}$. This is done by preparing a set of input states $\{ \ket{a_i} \}$ and performing measurements of a set of operators $\{B_j\}$ to estimate the probabilities $p_{ij} =\mathrm{Tr}[B_j^\dagger \mathcal{E}(\ket{a_i}\bra{a_i})].$ If the input states and measurement operators span the input and output spaces, respectively, then the set $\{p_{ij}\}$ reconstructs the channel $\mathcal{E}$. For an $n$-qubit channel, the input space is constructed via tensor products of $\{\ket{0}, \ket{1}, \ket{+} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}), \ket{+i} = \frac{1}{\sqrt{2}}(\ket{0} + i\ket{1}) \}$, and the measurement space via tensor products of $\sigma_x$, $\sigma_y$, and $\sigma_z$. Thus, a total of $4^n 3^n$ experiments are conducted to estimate $4^{2n}$ probabilities.
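To make the reconstruction concrete, the sketch below is our own simplified, noise-free version of single-qubit process tomography in the Pauli-transfer-matrix (PTM) picture, $\mathcal{R}_{ij} = \frac{1}{2}\mathrm{Tr}[P_i\,\mathcal{E}(P_j)]$: each Pauli is expanded in the span of the four input states listed above, so only channel evaluations on physical states are needed.

```python
import numpy as np

# Noise-free sketch (ours) of single-qubit QPT in the PTM picture:
# R_ij = Tr[P_i E(P_j)] / 2, with each Pauli expanded in the span of the
# four standard input states |0>, |1>, |+>, |+i>.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
r0, r1, rp, ri = [np.outer(k, np.conj(k)) for k in kets]

def ptm(channel):
    """Reconstruct the PTM of a linear channel from its action on states."""
    pauli_combos = [r0 + r1,             # I = r0 + r1
                    2 * rp - r0 - r1,    # X = 2 r+ - r0 - r1
                    2 * ri - r0 - r1,    # Y = 2 r+i - r0 - r1
                    r0 - r1]             # Z = r0 - r1
    outs = [channel(c) for c in pauli_combos]   # valid since E is linear
    R = np.zeros((4, 4))
    for i, P in enumerate(paulis):
        for j, O in enumerate(outs):
            R[i, j] = np.real(np.trace(P @ O)) / 2
    return R
```

In an experiment, each output would instead come from finite-shot Pauli expectation values; composing the reconstructed PTM with the inverse of the ideal one then isolates the error channel, as described in the text.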
After reconstructing the channel $\mathcal{U}_{A-S}$ through QPT, we extract the error channel by composing with the inverse of the ideal channel: $\mathcal{E} = \mathcal{U} \circ \mathcal{U}_{\text{ideal}}^{-1}$. The error channel is converted to the Pauli-transfer matrix representation $\mathcal{R}$, which is strictly real. In the ideal case, $\mathcal{R} = I$, the identity matrix, representing no errors. The absolute difference between the noisy reconstructed $\mathcal{R}$ and the ideal, $|\mathcal{R} - I|$, is shown in Figure~\ref{fig:sqrt-swap-process-tomo}. The average gate fidelities of the reconstructed channels were $F=0.827$, $F=0.877$, and $F = 0.846$ for \textit{ibm\_lima}, \textit{ibm\_belem}, and \textit{ibm\_perth}, respectively. While the average gate fidelities are comparable, we can see clear differences in the matrix entries in Figure~\ref{fig:sqrt-swap-process-tomo}. Typically, two-qubit gates have coherent errors due to imperfections in calibration arising from unwanted terms in the cross-resonance interaction Hamiltonian \cite{sheldonProcedureSystematicallyTuning2016, woodSpecialSessionNoise2020b}.
\begin{figure}[htp]
\vspace{-0.1in}
\includegraphics[width=1\linewidth]{figs/all-sqrt-swap-tomo.pdf}%
\vspace{-0.1in}
\caption{Process Tomography of the steering circuit to prepare $\ket{+}$ on IBM Quantum machines.
Both \textit{ibm\_lima} and \textit{ibm\_belem} are 5-qubit Falcon r4 (year 2020) processors with quantum volumes of 8 and 16, respectively. \textit{ibm\_perth} is a 7-qubit Falcon r5.11H (year 2021) processor with a quantum volume of 32. As indicated by the quantum volume benchmark, \textit{ibm\_perth} qubits are expected to have higher stability and longer lifetimes.} \label{fig:sqrt-swap-process-tomo}
\vspace{-0.1in}
\end{figure}
\vspace{-0.2in}
\subsection{Evaluation of Qubit-Qubit Protocol}
\vspace{-0.1in}
We employed the MIQS protocol to prepare 1-qubit stabilizer states.
The stabilizer states form a projective 3-design, analogous to the role of the Clifford group in randomized benchmarking.
Stabilizer states can also be defined as the states produced by gates from the Clifford group ($H$, $CNOT$, and $S$ gates) applied to the $\ket{0}$ state. We express the system-qubit density state as
\vspace{-0.05in}
\begin{equation}
\rho_S(n) = \frac{1}{2}(I + \vec{s}(n) \cdot \vec{\sigma})
\vspace{-0.05in}
\end{equation}
where $\vec{s}(n)$ is a three-component vector that depends on the current iteration $n$ of the steering protocol, and $\vec{\sigma}$ is a vector of the Pauli matrices. The single qubit stabilizers, their vector coordinates $\vec{s}$, and the necessary steering operator $U_{A,S}$ are summarized in Table~\ref{tab:stabilizers}.
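As a quick numerical sanity check of this parameterization (helper names are ours), one can build $\rho_S$ from a Bloch vector $\vec{s}$ and recover $s_i = \mathrm{Tr}[\rho_S \sigma_i]$:

```python
import numpy as np

# Round-trip check (ours) of the Bloch parameterization
# rho_S = (I + s . sigma) / 2: build rho from s, recover s_i = Tr[rho sigma_i].
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_x
         np.array([[0, -1j], [1j, 0]]),               # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_z

def bloch_to_rho(s):
    return 0.5 * (np.eye(2, dtype=complex) + sum(si * P for si, P in zip(s, sigma)))

def rho_to_bloch(rho):
    return np.array([float(np.real(np.trace(rho @ P))) for P in sigma])
```

For example, $\vec{s} = (1, 0, 0)$ reproduces $\ket{+}\bra{+}$, matching the corresponding table entry.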
\begin{table}[htp]
\centering
\begin{tabular}{c | c c | c |c}
\hline
$\ket{\psi_\oplus}$ & $\theta$ & $\phi$ & $\vec{s}$ & $U_{A,S}$ \\
\hline\hline
$\ket{0}$ & $0$ & $0$ & (0, 0, 1) & $\exp(-i\frac{J}{2}\left(\sigma_A^x\sigma_S^x + \sigma_A^y\sigma_S^y\right))$\\
$\ket{1}$ & $\pi$ & $0$ & (0, 0, -1) & $\exp(-i\frac{J}{2}\left(\sigma_A^x\sigma_S^x - \sigma_A^y\sigma_S^y\right))$\\
$\ket{+}$ & $\frac{\pi}{2}$ & $0$ & (1, 0, 0) & $\exp(-i\frac{J}{2}\left(\sigma_A^x\sigma_S^z - \sigma_A^y\sigma_S^y\right))$\\
$\ket{-}$ & $\frac{\pi}{2}$ & $\pi$ & (-1, 0, 0) & $\exp(-i\frac{J}{2}\left(\sigma_A^x\sigma_S^z + \sigma_A^y\sigma_S^y\right))$\\
$\ket{i}$ & $\frac{\pi}{2}$ & $\frac{\pi}{2}$ & (0, 1, 0) & $\exp(-i\frac{J}{2}\left(\sigma_A^y\sigma_S^x - \sigma_A^x\sigma_S^z\right))$ \\
$\ket{-i}$ & $\frac{\pi}{2}$ & $\frac{3\pi}{2}$ & (0, -1, 0) & $\exp(-i\frac{J}{2}\left(\sigma_A^x\sigma_S^z -\sigma_A^y\sigma_S^x\right) )$
\end{tabular}
\vspace{-0.05in}
\caption{Single-qubit stabilizer states parameterized by the angles $\theta$ and $\phi$, together with the steering operator $U_{A,S}$ for the MIQS protocol.\label{tab:stabilizers}}
\end{table}
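As a sanity check of the first row of Table~\ref{tab:stabilizers}, the passive protocol can be simulated classically (our sketch below, not the paper's code): reset the ancilla to $\ket{0}$, couple via $U_{A,S} = \exp(-i\frac{J}{2}(\sigma_A^x\sigma_S^x + \sigma_A^y\sigma_S^y))$, trace out the ancilla, and repeat. The system qubit converges to the dark state $\ket{0}$ from any initial state.

```python
import numpy as np

# Numerical check (ours) of the first table row: a fresh ancilla in |0>,
# coupled via U = exp(-i J/2 (X_A X_S + Y_A Y_S)) and then discarded, drives
# the system qubit toward the dark state |0>.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def expmi(H):
    """exp(-i H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def steer_step(rho_S, J):
    """One passive MIQS step: fresh ancilla in |0>, couple, trace out ancilla."""
    U = expmi(0.5 * J * (np.kron(X, X) + np.kron(Y, Y)))
    rho_AS = np.kron(np.diag([1.0, 0.0]).astype(complex), rho_S)
    # partial trace over the ancilla (first tensor factor)
    return (U @ rho_AS @ U.conj().T).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
```

At $J = \pi/2$ the excitation is fully swapped onto the ancilla in a single step; for smaller $J$, the $\ket{1}$ population shrinks by $\cos^2 J$ per step, giving gradual convergence.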
Following Section~\ref{sec:qubit-qubit-miqs}, we develop the quantum circuits for each desired stabilizer state.
We ran the experiment $30$ times, with $1024$ shots each, using quantum state tomography to estimate the density state of the system at each step $n$ of the MIQS protocol.
Figure~\ref{fig:sqrt-swap-steer-ibmq} shows the average result, along with error bars, of running the circuit from Figure~\ref{fig:sqrt-swap-circuit} to prepare $\ket{\psi_\oplus} = \ket{+}$ for $n$ up to 30.
The error bars indicate the decoherence associated with the system qubit.
Namely, for increasing $n$, we see an increase in the uncertainty of the measured density state.
We then compute the fidelity for all stabilizer states, and find their average.
\begin{figure}[htp]
\vspace{-0.2in}
\subfloat[Average state fidelity between $\rho^n$ and target state. \label{fig:avg-fidelity-plus}]{%
\includegraphics[width=0.5\linewidth]{figs/lima-vs-belem-vs-perth-fid-err.pdf}%
}
\subfloat[Steering inequality. \label{fig:steering-inequality}]{%
\includegraphics[width=0.5\linewidth]{figs/lima-vs-belem-vs-perth-inequality.pdf}%
}
\vspace{-0.1in}
\caption{Convergence of qubit fidelity throughout the execution of the steering protocol.
(a) Depicts the estimated fidelity across three IBM quantum machines, with the best fidelity being achieved by \textit{ibm\_perth}.
(b) Shows that the steering inequality given by Equation~\ref{eq:steer-ineq} is satisfied.}
\end{figure}
\begin{figure*}[htp]
\centering
\subfloat[Average fidelity of preparing stabilizer states versus the number of repetitions $N$ with different coupling strengths $J$. For certain values of $J$, the fidelity decreases at first before increasing.\label{fig:fid-v-N}]{%
\includegraphics[width=0.35\linewidth]{figs/avg-fidelity-vs-N.pdf}
}
\hspace{0.1in}
\subfloat[Average fidelity of steering to all stabilizer states with different coupling strengths $J$. The number of repetitions of the protocol (vertical dots) is optimally chosen for each $J$. Maximum fidelity of $93 \pm 1\%$ is observed for $J = \pi/2 + \pi/8$. \label{fig:avg-stabilizer}]{%
\includegraphics[width=0.35\linewidth]{figs/lagos-fid-j-vs-n.pdf}
}
\vspace{-0.1in}
\caption{Preparation of qubit stabilizer states for various values of the coupling parameter $J$. The fidelity is given as an average over all stabilizer states. All experiments are performed on \textit{ibm\_perth}.}
\vspace{-0.2in}
\end{figure*}
Figure~\ref{fig:avg-fidelity-plus} shows the average fidelity for all single-qubit stabilizer states.
Furthermore, Figure~\ref{fig:steering-inequality} confirms that the steering inequality (Equation~\ref{eq:steer-ineq}) is satisfied.
The quantum computer \textit{ibm\_perth} achieved the highest overall fidelity and stability.
As noted in Section~\ref{sec:geometry}, the qubit-qubit operator $U_{A,S}$ can be characterized by the coupling parameter $J$.
In theory, this parameter sets the entanglement strength required for steering.
To experimentally analyze the role that $J$ plays, we prepare the stabilizer states with varying coupling $J$.
Figure~\ref{fig:fid-v-N} shows the fidelity of preparing the $\ket{+}$ state for varying $J$ on \textit{ibm\_perth}.
Although $J=\pi/2$ achieves the fastest convergence, it does not correspond to the highest fidelity.
Figure~\ref{fig:avg-stabilizer} shows the average fidelity of steering to all the stabilizer states, computed as
\vspace{-0.2in}
\begin{equation}
\mathcal{F} = \frac{1}{6}\sum_{i=1}^6\bra{\psi_i}\rho_i\ket{\psi_i}.
\vspace{-0.1in}
\end{equation}
On average, the fidelity tends to decrease with smaller $J$ values.
Figure~\ref{fig:passive-vs-active-time} takes the average number of repetitions (applications of the ancilla-system entanglement operation in Figure~\ref{fig:sqrt-swap-circuit}) needed to obtain a fidelity $\mathcal{F} > 0.9$ and compares it against the active steering approach, where we end the protocol once the readout of the ancilla is a $1$. Each bar in the figure indicates the percentage of runs that reach the desired fidelity. For example, the leftmost bar shows that passive quantum steering reaches the desired fidelity 10\% of the time (e.g., out of 100 runs) if we apply the entanglement operation only once ($n$=1 in Figure~\ref{fig:sqrt-swap-circuit}).
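The stopping statistics of the active approach can be sketched with a toy model (ours; the per-round success probability below is an assumption, not a measured value): if each round's ancilla readout yields ``1'' with probability $q$, the repetition count is geometric with mean $1/q$ and an exponentially decaying histogram.

```python
import numpy as np

# Toy model (ours) for the active protocol's stopping time: each round's
# ancilla readout yields "1" (heralding success) with probability q, so the
# repetition count is geometric with mean 1/q and an exponentially decaying
# count frequency, the qualitative behaviour seen in the histogram.
rng = np.random.default_rng(0)

def rounds_until_herald(q, rng):
    """Number of protocol repetitions until the first '1' readout."""
    n = 1
    while rng.random() >= q:
        n += 1
    return n

q = 0.6                     # assumed per-round heralding probability (ours)
samples = np.array([rounds_until_herald(q, rng) for _ in range(20000)])
mean_rounds = samples.mean()            # close to 1/q
```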
\begin{figure}[htp]
\centering
\vspace{-0.1in}
\includegraphics[width=0.9\linewidth]{figs/passive-vs-active-time.pdf}%
\vspace{-0.2in}
\caption{Histogram of protocol repetitions (effort) for preparing stabilizer states with varying steering operators determined by coupling strength $J$.
Passive steering exhibits a Poissonian process, with an exponentially decaying count frequency (log scale).
The mean number of repetitions is $\mathcal{N}_{\mathrm{mean}}^{\mathrm{passive}} \approx 3.8$.
The active approach offers a $2.5\times$ improvement over the passive approach, with a mean repetition count of $\mathcal{N}_{\mathrm{mean}}^{\mathrm{active}}\approx 1.6$.
The cumulative distribution function (CDF) is also shown, further displaying the faster convergence of the active protocol.
\label{fig:passive-vs-active-time}}
\end{figure}
\vspace{-0.2in}
\subsection{Evaluation of Qubit-Qutrit Protocol}
\vspace{-0.1in}
Quantum control beyond two-level systems has been exploited in superconducting quantum processors since the early days of the technology.
Examples include utilizing the higher levels for qubit readout \cite{martinisRabiOscillationsLarge2002, cooperObservationQuantumOscillations2004, luceroHighFidelityGatesSingle2008}, faster qubit initialization \cite{valenzuelaMicrowaveInducedCoolingSuperconducting2006}, and spin-1 quantum simulation \cite{neeleyEmulationQuantumSpin2009}.
Steps towards ternary quantum computation with superconducting transmon devices have been taken over the last decade \cite{bianchettiControlTomographyThree2010, abdumalikovElectromagneticallyInducedTransparency2010, abdumalikovjrExperimentalRealizationNonAbelian2013a, jergerContextualityNonlocalitySuperconducting2016, tanTopologicalMaxwellMetal2018, honigl-decrinisMixingCoherentWaves2018, vepsalainenSimulatingSpinChains2020, fedorovImplementationToffoliGate2012}. Recently, these efforts have led to
the implementation of high-fidelity single-qutrit gates
\cite{yurtalanImplementationWalshHadamardGate2020, morvanQutritRandomizedBenchmarking2021}.
Many physical devices, such as superconducting transmons, naturally possess higher-energy states, which are typically ignored in order to realize qubits. Controlling these higher-energy states, however, can be difficult: various factors must be calibrated, such as the drive frequency, drive amplitude, and leakage. Our goal is to prepare a qutrit in an arbitrary state utilizing an ancilla qubit; we believe MIQS can simplify the initialization of a qutrit by coupling it to a qubit.
We demonstrate the protocol by preparing the equal-superposition qutrit state
\vspace{-0.05in}
\begin{equation}
\ket{\psi_\oplus} = \frac{1}{\sqrt{3}}\left(\ket{0} + \ket{1} + \ket{2}\right)
\vspace{-0.05in}
\end{equation}
via a qubit-qutrit operator as defined by Equation~\ref{eq:qutrit-hamiltonian}.
The protocol is repeated $N$ times, where at each step $n$ we perform qutrit quantum state tomography (see Appendix~\ref{sec:qutrit-tomography}).
Figure~\ref{fig:qutrit-fidelity} shows the estimated average fidelity at each step $n$ on \textit{ibm\_perth}.
In comparison with the qubit case, the qutrit fidelity exhibits increased error as a result of:
(1) measurement error in classifying the $\ket{2}$ state,
(2) the shorter coherence time of the $\ket{2}$ state, and
(3) the heightened complexity of performing full qutrit state tomography.
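The fidelities reported here are of the form $F = \bra{\psi_\oplus}\rho\ket{\psi_\oplus}$ against the equal-superposition target; a minimal helper (ours) makes this concrete:

```python
import numpy as np

# Minimal helper (ours) for the reported qutrit fidelities:
# F = <psi_target| rho |psi_target> with the equal-superposition target.
psi_target = np.ones(3, dtype=complex) / np.sqrt(3)

def qutrit_fidelity(rho):
    """State fidelity of a qutrit density matrix with the pure target."""
    return float(np.real(np.conj(psi_target) @ rho @ psi_target))
```

For example, depolarizing the target with weight $p$ gives $F = (1-p) + p/3$.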
\begin{figure}[htp]
\includegraphics[width=0.8\linewidth]{figs/qutrit-fidelity.pdf}%
\vspace{-0.2in}
\caption{Average qutrit state fidelity between $\rho^n$ and the desired target state $\ket{\psi_\oplus} = \frac{1}{\sqrt{3}}\left(\ket{0} + \ket{1} + \ket{2}\right)$. The errors are primarily from inherent measurement error in discriminating the qutrit state, weaker $T_1$ coherence time of the $\ket{1}\to\ket{2}$ subspace, and increased overhead in performing qutrit state tomography. We obtained a state fidelity of $80\pm 9\%$. \label{fig:qutrit-fidelity} }
\end{figure}
\vspace{-0.2in}
\section{Conclusions and Outlooks}
\vspace{-0.1in}
A major challenge in quantum computing is efficiently preparing an initial (arbitrary) state.
We experimentally demonstrate measurement-induced steering on contemporary superconducting quantum computers to prepare arbitrary qubit and qutrit states.
By applying a simple repetition of gates and ancilla measurements, we generate arbitrary qubit states with fidelity $93 \pm 1\%$ and arbitrary qutrit states with fidelity $80\pm 9\%$.
To achieve this, we generate optimal quantum circuits that implement the steering operator, and experimentally reconstruct the density states via quantum state tomography to obtain the fidelity.
We explored the dependence on a tunable coupling parameter that relates fidelity convergence to the number of repetitions of the protocol.
Additionally, we noted that by taking advantage of readout outcomes, we may accelerate the convergence.
Furthermore, for qutrit functionality, we calibrate qutrit gates using the pulse-level programming model Qiskit Pulse via cloud access to IBM Quantum devices.
Traditionally, the fidelity of an initialized state and the fidelity of a quantum gate are considered independently.
We demonstrate that by utilizing the programmability of a digital quantum processor, arbitrary quantum states can be prepared via a simple protocol of repeatedly executing the same small set of quantum gates. The success of the protocol -- achieving high state initialization fidelity -- depends primarily on the fidelity of the quantum gates and the stability of the qubits. Therefore, from a quantum engineer's point of view, the task of state preparation may be considered a byproduct of achieving high gate fidelity.
Additionally, we demonstrate state preparation of a qutrit, escaping the conventional notion of a binary quantum system.
From a quantum technology point of view, the ability to access more quantum information in higher dimensions has direct advantages in quantum error-correcting codes, as well as asymptotic improvements in computation in comparison with binary computation.
Traditional control of a qutrit introduces further engineering overhead, such as careful calibration of drive frequency, drive amplitude, and phases.
From a device design standpoint, several compromises need to be made, including speed of readout versus the coherence of a qutrit.
However, for the task of qutrit state preparation via steering, a specific subclass of qutrit gates is needed to prepare an arbitrary state which lowers the engineering overhead. We demonstrated the necessary calibrations and executions of qutrit gates on superconducting transmons to prepare an equal-superposition qutrit state. We believe this research paves a path to reliably prepare higher-dimensional quantum states on experimental platforms.
Future work in utilizing steering for state preparation on experimental quantum devices consists of several challenges and possible directions:
\textit{Entangled-state preparation:} highly entangled states are crucial for implementing error-correcting codes and performing quantum information processing. However, preparing an arbitrary entangled state via steering requires appropriately coupling to measurement-capable ancilla qubits. Contemporary superconducting quantum devices have restrictive connectivity between qubits, which introduces additional overhead to transfer quantum information (i.e., via SWAP gates). Trapped-ion quantum computers may be better suited for this task due to all-to-all coupling between qubits. Unfortunately, compared to superconducting qubits, measurement operations on trapped-ion qubits are more disruptive due to stray light \cite{gaeblerSuppressionMidcircuitMeasurement2021}. Assessing the feasibility of steering on various contemporary hardware platforms remains an open challenge.
\textit{Device-specific measurement:} it is rarely the case that measurements are conducted on a qubit directly. Instead, measurement typically observes what effect a system $\ket{\psi}$ has on an environment. Generally, the system is coupled with an apparatus $\ket{\theta}$ to give an overall state $\ket{\Psi} = U\ket{\theta}\otimes\ket{\psi}$ after an entangling operation $U$. Then a measurement is conducted on the apparatus which disentangles it from the system. For example, superconducting transmon qubits are measured through a readout resonator which couples with the transmon. A frequency shift of the resonator is observed depending on the state of the transmon \cite{mcclureRapidDrivenReset2016}. Therefore, assuming an appropriate entanglement $U$, it is possible to utilize quantum steering to prepare arbitrary system quantum states by coupling and measuring an apparatus -- thereby reducing the overall use of expensive qubits to act as ancillas.
\textit{Parameterized quantum algorithms:} many near-term quantum algorithms utilize parameterized quantum circuits to prepare quantum states such that an expectation value is minimized \cite{peruzzoVariationalEigenvalueSolver2014a}. Unfortunately, parameterized circuits suffer from barren plateaus, whereby a classical optimizer is unable to solve the high-dimensional non-convex optimization \cite{grantInitializationStrategyAddressing2019a}. Quantum steering provides theoretical guarantees for state initialization, and may overcome pitfalls of traditional parameterized quantum circuits. Namely, active steering provides a feedback mechanism whereby the optimization may be aided by making local decisions rather than finding a global optimum directly.
\textit{Steering quantum gates:} certain systems contain a dark space that is spanned by several dark states.
A closed (non-)adiabatic trajectory can be used to induce a unitary operator in the dark space \cite{wilczekAppearanceGaugeStructure1984a, snizhkoNonAbelianGeometricDephasing2019a}.
In other words, the generalization of the Berry phase -- a non-abelian holonomy -- can be used to realize quantum gates \cite{zanardiHolonomicQuantumComputation1999b}.
An intriguing direction is to study the role that a steering protocol may play in realizing quantum gates via a holonomy.
\vspace{-0.2in}
\begin{acknowledgments}
\vspace{-0.1in}
The authors gratefully thank the IBM Quantum team and the services offered through the IBM Quantum Researchers Program.
The authors also acknowledge support from the National Science Foundation, Grant No. CCF-1908131.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
The problem of scene modeling\cite{kolmogorov2002multi, dyer2001volumetric} is a prominent pillar in the field of computer vision with applications ranging from novel view synthesis\cite{avidan1997novel, daribo2010depth}, augmented and virtual reality\cite{azuma1997survey, burdea2003virtual}, SLAM\cite{grisetti2010tutorial, mur2015orb}, and many more. Particularly under static scene conditions, NeRF \cite{mildenhall2021nerf} has recently exhibited remarkable progress in synthesizing photorealistic novel views from sparse 2D images.
The hallmark of NeRF is the architectural bias of neural networks. That is, the natural (Lipschitz) smoothness of neural functions acts as an implicit \emph{neural prior}. This property imposes self-regularization \cite{schwarz2020graf, chan2021pi, karras2021alias} on otherwise ill-posed problems \cite{zhang2020nerf}. Recently, multiple works have attempted to extend NeRF to dynamic settings \cite{pumarola2021d, tretschk2021non, xian2021space, wang2021neural, johnson2022unbiased, gao2021dynamic, xu2022deforming, martin2021nerf, gao2020portrait}, leveraging these neural priors that made NeRFs successful. However, real-world dynamic scenes generally violate the analytical conveniences of multi-view geometry \cite{hartley2003multiple}. In this vein, dynamic NeRF works have primarily resorted to ray deformation paradigms \cite{newcombe2015dynamicfusion} to address geometric inconsistencies \cite{pumarola2021d, tretschk2021non, park2021nerfies, li2021neural, gao2021dynamic, xian2021space}. Although these approaches have yielded remarkable results, we show that their over-reliance on the smoothness of neural priors causes fundamental problems; using a neural network to simultaneously model both time and space is detrimental to accurate scene modeling, as space typically consists of sharp, high-frequency details, whereas temporal dynamics are naturally smooth and continuous (see Sec.~\ref{sec:ray_bending}).
On the other hand, the roots of dynamic scene modeling extend more classically to the problem of non-rigid structure from motion (NRS\textit{f}M\xspace). In summary, NRS\textit{f}M\xspace concerns recovering sparse 3D point deformations of a scene from 2D point correspondences between multiple 2D projections. Similar to dynamic NeRF, the NRS\textit{f}M\xspace setting is also severely underconstrained. In contrast to NeRF, however, the NRS\textit{f}M\xspace literature is heavily focused on formulating explicit priors to convert this ill-posed problem into a well-defined one. The performance of NRS\textit{f}M\xspace models mainly depends on the alignment of these priors with the deformation in question. Thus, since the early work of Bregler \etal \cite{bregler2000recovering}, which presented a classic low-rank factorization approach, a plethora of studies have explored different priors on shape space \cite{torresani2001tracking, torresani2003learning, rabaud2008re}, point trajectories \cite{akhter2008nonrigid, gotardo2011non, gotardo2011kernel}, or subspaces \cite{kumar2017spatio, agudo2017dust}.
The central aim of this paper is to present a generic framework that combines the strengths of the implicit neural priors of NeRFs and the well-designed explicit priors that are deeply rooted in the NRS\textit{f}M\xspace literature. To this end, we model the light and density fields of a 3D scene as bandlimited, high-dimensional signals. This particular standpoint enables complete factorization of spatio-temporal dynamics, allowing us to inject explicit priors on the time and space dynamics independently. To demonstrate the practical utility of our framework, we offer an example implementation that enforces 1) a low-rank constraint on the shape space, along with 2) a neural prior and 3) a union-of-subspaces prior on the time space. We show that the strong regularization effects entwined with these priors enable our model to reconstruct long-range dynamics and localize motion accurately, using only sparse RGB images for supervision. Our contributions are three-fold:
\begin{itemize}[topsep=-2pt,itemsep=2pt]
\item We show that existing mainstream extensions of NeRF to dynamic scenes suffer from critical drawbacks, primarily due to their over-reliance on neural priors.
\item We propose a generic framework that enables full factorization of space and time by formulating radiance fields as bandlimited signals. We only utilize RGB images from a monocular camera for supervision.
\item We empirically validate the efficacy of our framework by demonstrating better modeling of long-range dynamics, motion localization, and light/texture changes, compared to the baselines with more than $10 \times$ faster training times.
\end{itemize}
\section{Introduction}
\label{sec:intro}
The problem of scene modeling\cite{kolmogorov2002multi, dyer2001volumetric} is a prominent pillar in the field of computer vision with applications ranging from novel view synthesis\cite{avidan1997novel, daribo2010depth}, augmented and virtual reality\cite{azuma1997survey, burdea2003virtual}, SLAM\cite{grisetti2010tutorial, mur2015orb}, and many more. Particularly under static scene conditions, NeRF \cite{mildenhall2021nerf} has recently exhibited remarkable progress in synthesizing photorealistic novel views from sparse 2D images.
The hallmark of NeRF is the architectural bias of neural networks. That is, the natural (Lipschitz) smoothness of neural functions acts as an implicit \emph{neural prior}. This property imposes self-regularization\cite{schwarz2020graf, chan2021pi, karras2021alias} to otherwise ill-posed problems \cite{zhang2020nerf}. Recently, multiple works have attempted to extend NeRF to dynamic settings \cite{pumarola2021d, tretschk2021non, xian2021space, wang2021neural, johnson2022unbiased, gao2021dynamic, xu2022deforming, martin2021nerf, gao2020portrait}, leveraging these neural priors that made NeRFs successful. However, real-world dynamic scenes generally violate the analytical conveniences of multi-view geometry\cite{hartley2003multiple}. In this vein, dynamic NeRF works have primarily resorted to using ray deformation paradigms \cite{newcombe2015dynamicfusion} for addressing geometric inconsistencies \cite{pumarola2021d, tretschk2021non, park2021nerfies, li2021neural, gao2021dynamic, xian2021space}. Although these approaches have yielded remarkable results, we show that their over-reliance on the smoothness of neural priors cause fundamental problems; using a neural network to simultaneously model both time and space is detrimental to accurate scene modeling, as space typically consists of sharp/high-frequency details, whereas temporal dynamics are naturally smooth and continuous (see Sec.~ \ref{sec:ray_bending}).
On the other hand, the roots of dynamic scene modeling more classically extend to the problem of non-rigid structure from motion (NRS\textit{f}M\xspace). In summary, NRS\textit{f}M\xspace concerns recovering sparse 3D point deformations of a scene from 2D point correspondences between multiple 2D projections. Similar to dynamic NeRF, the NRS\textit{f}M\xspace setting is also severely underconstrained. In contrast to NeRF, however, the NRS\textit{f}M\xspace literature is heavily focused on formulating explicit priors that convert this ill-posed problem into a well-defined one. The performance of NRS\textit{f}M\xspace models mainly depends on the alignment of these priors with the deformation in question. Thus, since the early work of Bregler \etal \cite{bregler2000recovering}, which presented a classic low-rank factorization approach, a plethora of studies have explored different priors on shape space \cite{torresani2001tracking, torresani2003learning, rabaud2008re}, point trajectories \cite{akhter2008nonrigid, gotardo2011non, gotardo2011kernel}, or subspaces \cite{kumar2017spatio, agudo2017dust}.
The central thesis of this paper is a generic framework that combines the strengths of the implicit neural priors of NeRFs with the well-designed explicit priors that are deeply rooted in the NRS\textit{f}M\xspace literature. To this end, we model the light and density fields of a 3D scene as bandlimited, high-dimensional signals. This particular standpoint enables complete factorization of spatio-temporal dynamics, allowing us to inject explicit priors on the time and space dynamics independently. To demonstrate the practical utility of our framework, we offer an example implementation that enforces 1) a low-rank constraint on the shape space, along with 2) a neural prior and 3) a union-of-subspace prior on the time space. We show that the strong regularization effects entwined with these priors enable our model to reconstruct long-range dynamics and localize motion accurately, using only sparse RGB images for supervision. Our contributions are three-fold:
\begin{itemize}[topsep=-2pt,itemsep=2pt]
\item We show that existing mainstream extensions of NeRF to dynamic scenes suffer from critical drawbacks, primarily due to their over-reliance on neural priors.
\item We propose a generic framework that enables full factorization of space and time by formulating radiance fields as bandlimited signals. We only utilize RGB images from a monocular camera for supervision.
\item We empirically validate the efficacy of our framework by demonstrating better modeling of long-range dynamics, motion localization, and light/texture changes, compared to the baselines with more than $10 \times$ faster training times.
\end{itemize}
\section{Related Work}
\label{sec:related}
\vspace{-0.5em}
Most successful methods for modeling dynamic scenes require either a setup containing multiple cameras\cite{zhang2003spacetime, tung2009complete, zhang2004spacetime, dou2013scanning, dou2016fusion4d} or active depth sensors\cite{newcombe2011kinectfusion, newcombe2015dynamicfusion, slavcheva2017killingfusion, yu2017bodyfusion}. In contrast, recovering the 3D structure of a scene using a monocular camera is a more challenging task that has been approached from various angles \cite{newcombe2011kinectfusion, yoon2020novel, dou2016fusion4d, avidan2000trajectory, wexler2000synthesis, niemeyer2019occupancy, li2021neural}. However, this paper only focuses on NeRF extensions and NRS\textit{f}M\xspace.
\noindent \textbf{NRS\textit{f}M\xspace.} The problem of NRS\textit{f}M\xspace focuses on modeling the 3D structure of sparse points using their 2D projections. To convert this problem to a well-defined one, various priors have been explored. These priors can be mainly categorized as shape-based and trajectory-based priors.
Bregler \etal~\cite{bregler2000recovering}, in their seminal work, argued that NRS\textit{f}M\xspace could be solved using a finite number of low-rank shape-basis functions\cite{garg2013dense}. Later, Torresani \etal \cite{torresani2001tracking} modeled the coefficients of the shape basis as a linear dynamical system. In contrast, Rabaud \etal \cite{rabaud2008re} proposed to learn a smooth manifold of shape configurations, and Gotardo \etal \cite{gotardo2011kernel} explored non-linear shape models using kernels. More recently, Agudo \etal \cite{agudo2017dust} imposed a union-of-subspace prior to constrain the shape deformations. Another interesting work revealed that learning shape deformations can be formulated as a block sparse dictionary learning problem \cite{kong2016prior}. Considering trajectory-based priors, Akhter \etal~\cite{akhter2008nonrigid} demonstrated that instead of decomposing the shape deformation over time with basis functions, the trajectory of measurements could be formulated with DCT basis functions. In the same spirit, \cite{zhu2013convolutional} exploited the convolutional structure of the trajectories. Multiple works~\cite{kumar2017spatio, agudo2017dust, zhu2014complex, zappella2013joint} showed that frames could be clustered to restrict trajectories within low-dimensional subspaces. This closely aligns with the manifold prior we propose in Sec.~\ref{sec:manifold}. Multiple works have also sought to explicitly regularize trajectories by minimizing their response to high-pass filters \cite{valmadre2012general}, injecting rigid key-frames~\cite{zhu20113d}, enforcing sparsity priors \cite{salzmann2011physically}, and considering articulated motion~\cite{park20113d}. Nonetheless, NRS\textit{f}M\xspace typically deals with sparse 3D points; in contrast, we focus on novel view synthesis, which requires reasoning about dense 3D structure.
\noindent \textbf{Dynamic NeRF.} Inspired by the success of NeRF, many studies have attempted to model dynamic neural radiance fields using the concept of ray deformation \cite{pumarola2021d, tretschk2021non, park2021nerfies, li2021neural, gao2021dynamic}. D-NeRF\cite{pumarola2021d} was the first among the above to propose a general framework that learns a per-point displacement from a given radiance field to a canonical one. Both \cite{tretschk2021non, gao2021dynamic} extended this idea and further introduced a constraint to model the foreground and background separately, allowing quicker convergence and a better-constrained search space. \cite{gao2021dynamic} introduced a method to disambiguate the self-occlusions that hinder the performance of these approaches. Nerfies\cite{park2021nerfies} achieves remarkable results on novel view synthesis of dynamic scenes by incorporating elastic regularization, but specifically targets self-portraits. Finally, several other NeRF extensions have also been proposed that require depth estimates \cite{xian2021space}, optical flows \cite{wang2021neural}, foreground masks~\cite{johnson2022unbiased, gao2021dynamic}, meshes~\cite{xu2022deforming}, or assume that dynamic objects are distractors to be removed \cite{martin2021nerf}. In Sec.~\ref{sec:ray_bending}, we critically analyze ray deformation approaches and, in fact, show that these methods model deformations of the light and density fields over time, instead of rays.
\section{Revisiting ray deformation networks}
\label{sec:ray_bending}
Extending NeRF to dynamic scenes fundamentally involves representing the scene as a continuous function with 6D inputs $(x,y,z,\theta, \phi, t)$, where $t$ is the time and $(\theta, \phi)$ is the viewing direction. However, it has been empirically validated\cite{pumarola2021d} that employing a single MLP that learns a mapping from 6D inputs to density and color fields yields sub-optimal results. Hence, existing works decompose the aforementioned task into two modules \cite{pumarola2021d, tretschk2021non, park2021nerfies, li2021neural, gao2021dynamic}: 1) the first MLP learns a warping field of 3D points $(\Delta x, \Delta y, \Delta z)$ sampled along the rays with respect to a canonical setting; 2) the second module then acts similarly to the original NeRF formulation, regressing the density and light fields given the warped samples along the rays $(x + \Delta x, y + \Delta y, z + \Delta z)$. Since the warping is applied to points sampled along the ray, this formulation is interpreted as deforming the rays as a function of time. Further, note that an underlying assumption here is that objects do not enter or leave the scene, and that the lighting/texture is consistent.
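The two-module pipeline above can be sketched as follows; `deform_mlp` and `nerf_mlp` are hypothetical stand-ins (simple closed-form functions here) for the trained deformation and canonical networks:

```python
import numpy as np

def deform_mlp(xyz, t):
    # Module 1 (hypothetical stand-in): maps a 3D sample and time t to a
    # displacement toward the canonical frame at t=0.
    return 0.01 * np.sin(t + xyz)  # smooth, time-dependent warp

def nerf_mlp(xyz):
    # Module 2 (hypothetical stand-in): canonical radiance field mapping a
    # warped 3D point to (density, rgb).
    density = float(np.exp(-np.sum(xyz ** 2)))
    rgb = np.clip(0.5 + 0.5 * xyz, 0.0, 1.0)
    return density, rgb

def query(xyz, t):
    # Points sampled along a ray are first warped to the canonical frame,
    # so the scene at time t is modeled as a deformation of the t=0 scene.
    delta = deform_mlp(xyz, t)
    return nerf_mlp(xyz + delta)

density, rgb = query(np.array([0.1, -0.2, 0.3]), t=0.5)
```

Note that under this design, all appearance at time $t$ must be explained by warping samples back to the single canonical field, which is the assumption the following analysis scrutinizes.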
However, we notice that existing implementations of this framework do not adhere to these constraints (see \supprefshort{suppsec:ray_deformation}). Specifically, we show that such networks can indeed model light and density changes separately (to an extent), which is infeasible with a model that only learns ray deformations (see Fig.~\ref{fig:real} and Fig.~\ref{fig:synthetic}). However, to avoid confusion, we will keep referring to this class of models as ray deformation networks. Next, we discuss several critical limitations of these models.
\subsection{Limitations of ray deformation networks}
\vspace{-9pt}
\label{subsec:limitations}
In this section, we present a brief exposition of the limitations entailed with ray deformation networks. For an extended analysis, refer to \supprefshort{suppsec:ray_deformation}.
\noindent \textbf{Dependency on a canonical frame: }Ray deformation networks require choosing a canonical frame arbitrarily, with most models commonly choosing the frame at $t=0$ to this end. However, this choice can significantly harm model performance in cases where 1) objects or the camera are subjected to long-range translations, and 2) new objects appear in the future. In both cases, the canonical frame at $t=0$ needs to learn an average scene representation in which all future information is present, which becomes increasingly infeasible as the scene becomes more complex. On the other hand, the model also needs to preserve continuity; the model output at $(t=\delta t)$ needs to be a smooth transition from the canonical scene at $t=0$, which can be impractical if the scene comprises abundant future information. In contrast, our framework does not depend on a canonical scene.
\noindent \textbf{Entanglement of light and density fields: } Although ray deformation networks are able to deform the light and density fields, they are still highly entangled. More precisely, it can be shown that in order to achieve complete disentanglement of the light and density fields, the network needs to preserve a specific block-diagonal Jacobian structure in one of the hidden layers, which is an extremely restrictive requirement. In comparison, our framework achieves complete disentanglement by design, modeling the light and density fields independently.
\noindent \textbf{Limited expressiveness: } Ray deformation networks comprise a bottleneck of dimension three. Therefore, each of the density and light fields modeled by such a network becomes a three-dimensional manifold. Thus, they cannot encode complex dynamics that need to be parameterized by four variables $(x,y,z,t)$ simultaneously.
\noindent \textbf{Substandard separation of background and motion: } Ray deformation networks model the warp field using a single MLP. However, this is a substandard design choice since the space and time variations have different spectral properties. For instance, the space may contain high-fidelity details, and in contrast, time dynamics are generally smoother. Therefore, using an MLP with a particular bandwidth for learning both spatial and time variations together leads to sub-optimal reconstructions. On the contrary, our framework enables factorization of space and time dynamics, allowing better separation of static and dynamic regions. Next, we formally present our framework.
\section{Our framework}
\label{sec:method}
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{figs/arch.pdf}
\caption{\textbf{The proposed implementation of our framework.} We treat the light and density fields as bandlimited, high-dimensional signals (only a single field is shown in the figure). The time evolution of each 3D point $(x,y,z)$ of the field is modeled as a finite linear combination of time-basis functions $\{\beta_j(t)\}$. The coefficients of the $\{\beta_j(t)\}$ are decomposed into outer products between learnable matrices ($\mathbf{M}$) and vectors ($\mathbf{v}$). This decomposition is inspired by \cite{chen2022tensorf}. Our formulation allows efficient factorization of time and space dynamics, leading to high-quality reconstructions of complex dynamics, along with faster convergence. }
\label{fig:arch}
\end{figure*}
Consider a set of 2D projections $\{I(t_n)\}_{n=1}^N$ of a 3D scene captured from a moving camera. For brevity, we drop the dependency on the camera poses from the notation. Without loss of generality, we assume that the scene is bounded within a cube with side length $D$. We begin by observing that there exists a latent density and color field corresponding to each $I(t_n)$, which can be discretized into a cubic grid of $D^3$ nodes. Then, rewriting the latent states of either field in matrix form gives
\begin{equation}
\resizebox{\columnwidth}{!}{%
$\mathbf{S} = \begin{bmatrix}
s(t_1, x_1, y_1, z_1) & s(t_1, x_2, y_2, z_2) & \dots & s(t_1, x_{D^3}, y_{D^3}, z_{D^3}) \\
\vdots & \vdots & \ddots & \vdots\\
s(t_N, x_1, y_1, z_1) & s(t_N, x_2, y_2, z_2) & \dots & s(t_N, x_{D^3}, y_{D^3}, z_{D^3}) \\
\end{bmatrix}_{N \times D^3}
}
\end{equation}
where $s(t_i, x_i, y_i, z_i)$ can be either the density or the emitted color value at the point $(x_i,y_i,z_i)$ at time $t_i$.
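For concreteness, a minimal numpy sketch of how $\mathbf{S}$ is assembled, with a hypothetical closed-form field standing in for the latent density/color values and toy sizes $D=4$, $N=5$:

```python
import numpy as np

D, N = 4, 5  # grid side length and number of time steps (toy sizes)

def field(t, xyz):
    # Hypothetical scalar field standing in for density or one color channel.
    return np.sin(t + xyz.sum(axis=-1))

# All D^3 grid nodes, flattened into rows of (x, y, z) coordinates.
grid = np.stack(np.meshgrid(*[np.arange(D)] * 3, indexing="ij"), -1).reshape(-1, 3)

# Row i of S is the flattened spatial snapshot of the field at time t_i.
S = np.stack([field(t, grid) for t in np.linspace(0.0, 1.0, N)])
```

Each row is thus one spatial snapshot and each column one point's time evolution, the two readings exploited next.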
\subsection{Memorization of latent states}
\label{sec:memorization}
Let $\mathrm{rank}(\mathbf{S}) = K \leq N$ (assuming $N < D^3$). Then, there exist $K$ basis vectors, each with dimension $D^3$, that can perfectly reconstruct (memorize) $\mathbf{S}$. More precisely, in this case, each row of $\mathbf{S}$ can be reconstructed as
\begin{equation}
\mathbf{S} (t_i, \cdot ) = \sum_{j=1}^K a_j(t_i) \hat{\boldsymbol{\alpha}}_j,
\end{equation}
where $\mathbf{S} (t_i, \cdot )$ is the $i^{th}$ row of $\mathbf{S}$, $\{ \hat{\boldsymbol{\alpha}}_j \}_{j=1}^K$ are basis vectors of dimension $D^3$, and $\{ a_j \}_{j=1}^K$ are scalar coefficients. Intuitively, each row of $\mathbf{S}$ corresponds to a snapshot of the field in space at a particular time instance. On the contrary, each column of $\mathbf{S}$ is a snapshot of the time evolution of a particular $(x,y,z)$ point in the field. We note an interesting duality here; since the dimensions of the row space and the column space of $\mathbf{S}$ are equal, it should be possible to reconstruct the time evolution of the density/color value of each $(x,y,z)$ position also using $K$ basis vectors. Thus, we model the time evolution of each point as
\vspace{-0.5em}
\begin{equation}
\mathbf{S}(x_i,y_i,z_i, \cdot) = \sum_{j=1}^K b_{j}(x_i,y_i,z_i)\hat{\boldsymbol{\beta}}_j,
\label{eq:time-memorization}
\end{equation}
where $\{\hat{\boldsymbol{\beta}}_j\}_{j=1}^{K} \in \mathbb{R}^N$ are basis vectors and $\{ b_j \}_{j=1}^K$ are scalars.
This change of perception is crucial for generalizing to unseen time instances and obtaining a space-time factorization, as we show in Sec.~\ref{sec:bandlimied}.
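This duality can be checked numerically: for a synthetic rank-$K$ matrix $\mathbf{S}$ (a sketch with assumed toy sizes, not the actual fields), $K$ basis vectors suffice to reconstruct it exactly, whether it is read by rows (spatial snapshots) or by columns (per-point time evolutions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D3, K = 6, 64, 3  # toy sizes: time steps, grid nodes, rank

# Synthetic rank-K state matrix: K spatial basis vectors mixed over time.
A = rng.standard_normal((N, K))   # time coefficients a_j(t_i)
B = rng.standard_normal((K, D3))  # spatial basis vectors alpha_j
S = A @ B

U, s, Vt = np.linalg.svd(S, full_matrices=False)
# Rows of S lie in the span of the K row-space basis vectors Vt[:K]...
S_rec = (U[:, :K] * s[:K]) @ Vt[:K]
# ...and, dually, each column (a point's time evolution) lies in the span
# of the K column-space basis vectors U[:, :K] -- the same K suffices.
```

The truncated SVD recovers $\mathbf{S}$ exactly because only $K$ singular values are non-zero, mirroring the row/column argument above.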
\subsection{Bandlimited fields and generalization}
\label{sec:bandlimied}
\vspace{-0.5em}
In Sec.~\ref{sec:memorization}, we established that the imposition of a low-rank assumption on the time-evolving field allows us to recover a set of observations using a finite number of basis vectors. However, recall that, in practice, only a sparse set of 2D observations $\{I(t_n)\}_{n=1}^N$ is at our disposal. Therefore, memorization is not sufficient, and the framework should be able to generalize to unseen time instances. To achieve this, we make the important assumption that the \emph{light and density fields are bandlimited signals}.\footnote{Note that we use the general notion of bandlimitedness here; a signal is bandlimited if, and only if, it can be reconstructed using a finite set of basis functions.} This particular assumption enables us to convert $\{\hat{\boldsymbol{\beta}}_j\}$ in Eq.~\ref{eq:time-memorization} to continuous time-dependent functions, thereby obtaining a continuous time-evolving field,
\vspace{-0.5em}
\begin{equation}
\mathbf{S}(x,y,z,t) = \sum_{j = 1}^K \beta_j(t) b_j(x,y,z).
\label{eq:bandlimited}
\end{equation}
Observe that under this view, $\{\hat{\boldsymbol{\beta}}_j\}$ can be considered discrete samples of the continuous functions $\{\beta_j(t)\}$. Further, Eq.~\ref{eq:bandlimited} provides a clean factorization of time and spatial dynamics, allowing us to impose priors on time and space independently. In Sec.~\ref{sec:implementation}, we present an implementation of the proposed framework. In this implementation, we inject a low-rank prior on space, along with smoothness and compact manifold priors on time. It is worth noting that our framework is generic enough to support alternative implementations and more complex priors, which we leave to future work.
\subsection{Implementation}
\vspace{-0.5em}
\label{sec:implementation}
Leveraging the factorization we achieved in Eq.~\ref{eq:bandlimited}, we can formulate the entire 3D field volume as a time-dependent, higher-dimensional signal that can be decomposed into a linear combination of 3D tensors $\mathbf{\mathcal{A}}^{xyz}_j \in \mathbb{R}^{D \times D \times D}$:
\vspace{-0.5em}
\begin{equation}
\mathcal{S}(t) = \sum_{j=1}^K\beta_j(t) \mathbf{\mathcal{A}}^{xyz}_j,
\end{equation}
where $\mathcal{S}(t) \in \mathbb{R}^{D \times D \times D}$ is the state of the field at time $t$. Note that we adopt the tensor notation here where the superscripts denote the dimensions, \textit{i.e.}, $x = 1, \dots, D$, $y = 1, \dots, D$, and $z = 1, \dots, D$.
To regularize the spatial variations, we employ a low-rank constraint on $\mathbf{\mathcal{A}}_j$ as,
\vspace{-0.5em}
\begin{equation}
\mathcal{S}(t) = \sum_{j=1}^K\beta_j(t) ( \mathbf{v}^z_j \otimes \mathbf{M}_j^{xy} + \mathbf{v}^x_j \otimes \mathbf{M}_j^{yz} + \mathbf{v}^y_j \otimes \mathbf{M}_j^{xz}),
\label{eq:decompose}
\end{equation}
where $\mathbf{v}_j \in \mathbb{R}^D$ and $\mathbf{M}_j \in \mathbb{R}^{D\times D}$ are one- and two-dimensional tensors, respectively, and $\otimes$ is the outer product. The above choice of factorization is inspired by the \emph{VM-decomposition} proposed in \cite{chen2022tensorf}. This factorization accomplishes two goals: 1) enforcing a low-rank constraint on the spatial variations of the field, and 2) significantly reducing the size of the model and the number of trainable parameters. We note that such low-rank priors have been widely employed in the NRS\textit{f}M\xspace literature for the same purpose \cite{torresani2001tracking, torresani2003learning, rabaud2008re}.
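A minimal sketch of Eq.~\ref{eq:decompose}, with randomly initialized tensors standing in for the learned factors $\mathbf{v}$ and $\mathbf{M}$, and assumed toy sizes $D=8$, $K=4$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 4  # toy grid side length and number of time-basis functions

# Learnable factors (random stand-ins): one vector/matrix pair per axis, per j.
v_x, v_y, v_z = (rng.standard_normal((K, D)) for _ in range(3))
M_yz, M_xz, M_xy = (rng.standard_normal((K, D, D)) for _ in range(3))

def state(beta):
    # beta: (K,) time-basis values beta_j(t). Each A_j is the sum of three
    # vector-times-matrix outer products -- the VM-decomposition.
    A = (np.einsum("kz,kxy->kxyz", v_z, M_xy)
         + np.einsum("kx,kyz->kxyz", v_x, M_yz)
         + np.einsum("ky,kxz->kxyz", v_y, M_xz))
    return np.einsum("k,kxyz->xyz", beta, A)  # S(t) = sum_j beta_j(t) A_j

S_t = state(rng.standard_normal(K))
```

Note the parameter saving: each $\mathbf{\mathcal{A}}_j$ needs only $3(D + D^2)$ values instead of $D^3$.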
\subsection{Neural trajectory basis}
\vspace{-0.5em}
In theory, it is possible to use any class of functions that form a complete basis in $L^2(\mathbb{R}, dt)$ as $\{\beta_j(t)\}$. Several such popular choices include the DCT basis, Fourier basis, and Bernstein basis, among many others.
Nonetheless, we use neural networks to parameterize our basis functions, leveraging the implicit architectural smoothness constraint built into them. We dub these basis functions the \emph{neural trajectory basis}. The neural trajectory basis provides an important implicit prior to our model: the field values should evolve smoothly over time. We also empirically noted that neural basis functions are naturally more expressive and adaptive, as they are learned end-to-end, compared to other choices (see Table~\ref{tab:abl-basis}). Expressiveness is crucial, as it is desirable to model the dynamics of each point with a minimal number of basis functions. Thus, we compute $\{\beta_j(t)\}$ via an MLP $\mathcal{F}(t): \mathbb{R} \to \mathbb{R}^K$ as,
\vspace{-0.5em}
\begin{equation}
\mathcal{F}(t) = [\beta_1(t), \beta_2(t), \dots, \beta_K(t)].
\end{equation}
We also show that the smoothness prior embedded in the neural trajectory basis closely aligns with the work of Valmadre \etal \cite{valmadre2012general}, who showed that, in NRS\textit{f}M\xspace, a trajectory's response to high-pass filters should be minimal. We validate that the neural trajectory basis implicitly preserves this property (see \supprefshort{suppsec:neural_basis}).
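A minimal numpy sketch of such a trajectory-basis MLP $\mathcal{F}: \mathbb{R} \to \mathbb{R}^K$; the weights here are random stand-ins for the end-to-end trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H = 4, 32  # number of basis functions and hidden width (toy sizes)

# Randomly initialized weights standing in for the trained MLP F(t).
W1, b1 = rng.standard_normal((H, 1)), np.zeros(H)
W2, b2 = rng.standard_normal((K, H)), np.zeros(K)

def trajectory_basis(t):
    # F(t) = [beta_1(t), ..., beta_K(t)]. The tanh nonlinearity keeps each
    # beta_j smooth in t -- the implicit smoothness prior on time evolution.
    h = np.tanh(W1 @ np.array([t]) + b1)
    return W2 @ h + b2

beta = trajectory_basis(0.5)
```

Because the whole time axis is carried by this single small MLP, the smoothness of its outputs regularizes all $D^3$ points at once.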
\subsection{Manifold Regularization}
\vspace{-0.5em}
\label{sec:manifold}
Multiple works in NRS\textit{f}M\xspace have explored restricting the subspace of dynamics in order to obtain better reconstructions. The high-level objective is to temporally cluster the motion in order to restrict similar dynamics to a low-dimensional subspace \cite{kumar2017spatio, agudo2017dust, zhu2014complex, zappella2013joint}. We observed that such a constraint can also improve our reconstructions.
More formally, we empirically asserted that better results are obtained by locally restricting the dimension of the submanifold that $\mathcal{S}(t)$ is immersed in.
Instead of clustering the motion across the entire sequence, we assume that dynamics are locally compact: movements that occur over a small time period can be described using a smaller subspace. To enforce this constraint, we adopt the following procedure.
Observe that $\frac{\partial \mathcal{S}(t)}{\partial t}$ exists for all $t$. Also, for a scene with (at least locally) continuously deforming light and density fields, we make the fair assumption that there exists a bijection from the time domain to $\mathcal{S}(t)$, \ie, $\mathcal{S}(t_1) = \mathcal{S}(t_2) \Leftrightarrow t_1 = t_2$. Further, the space $\mathcal{S}(t)$ is a Hausdorff space, and the domain of $\mathcal{S}(t)$ is compact. Recall the following theorem.
\noindent \textbf{Theorem:} \textit{A continuous bijection from a compact space to a Hausdorff space is a homeomorphism.}
Therefore, $\mathcal{S}(t)$ is a $1$-dimensional manifold embedded in a $D^3$-dimensional space, and its local coordinate chart is a compact subspace in $\mathbb{R}$. Further, at any given time $t$, $\mathcal{S}(t)$ is a linear combination of $K$ points $\{ \mathbf{v}^z_j \otimes \mathbf{M}_j^{xy} + \mathbf{v}^x_j \otimes \mathbf{M}_j^{yz} + \mathbf{v}^y_j \otimes \mathbf{M}_j^{xz} \}_{j=1}^K \in \mathbb{R}^{D \times D \times D}$. Therefore,
$\mathcal{S}(t)$ is a submanifold of $\mathbb{R}^K$.
Now, let $\mathbf{P}_j^{xyz} = \mathbf{v}_j^{z} \otimes \mathbf{M}_j^{xy} + \mathbf{v}^x_j \otimes \mathbf{M}_j^{yz} + \mathbf{v}^y_j \otimes \mathbf{M}_j^{xz}$. Suppose the dimension of the local submanifold we need is $W$, such that $K = dW$ for some integer $d$. Then, we define the 4D tensor $\mathbf{Q}_{j:j+W}^{xyzu} \in \mathbb{R}^{D \times D \times D \times W}$ such that $\mathbf{Q}_{j:j+W}^{xyzu} = \{\mathbf{P}_u^{xyz}\}_{u=j}^{j+W}$.
Next, we obtain
\vspace{-0.5em}
\begin{equation}
\resizebox{\columnwidth}{!}{%
$\mathbf{\Tilde{Q}}^{xyzu} (t) = \sum\limits_{n=0}^{d-1} \mathbf{Q}_{(nW + 1):W(n+1)}^{xyzu} \odot \mathrm{sinc} \big( (d-1)(t - \frac{n}{(d-1)}) \big),$%
}
\end{equation}
where $\odot$ is the element-wise multiplication, and $\mathrm{sinc}(r) =
\begin{cases}
1, & \text{if } r = 0\\
\frac{\mathrm{sin}(r)}{r}, & \text{otherwise}
\end{cases}$. The choice of the sinc function here is not arbitrary; it is crucial for the smooth transition between submanifolds as time progresses. More precisely, the sinc interpolation ensures that no frequencies higher than $(d-1)/2$ can be present in $\mathbf{\Tilde{Q}}^{xyzu} (t)$ along the temporal dimension. Finally, we can obtain the regularized field as,
\vspace{-0.5em}
\begin{equation}
\mathcal{\Tilde{S}}(t) = \sum_{u=1}^W\beta_u(t)\mathbf{\Tilde{Q}}^{xyzu} (t).
\label{eq:manifold}
\end{equation}
From a strict theoretical perspective, one can argue that Eq.~\ref{eq:manifold} violates the time and space factorization we obtained in Eq.~\ref{eq:decompose}. However, in practice, the sinc interpolation ensures that $\mathbf{\Tilde{Q}}^{xyzu} (t)$ is locally almost constant as long as we choose $d$ to be suitably small, since $\mathbf{\Tilde{Q}}^{xyzu} (t)$ then cannot contain frequencies higher than $(d-1)/2$. Further, Eq.~\ref{eq:manifold} ensures that $\mathcal{\Tilde{S}}(t)$ can only locally traverse within an $\mathbb{R}^W$ subspace where $W < K$, which is a more regularized setting than Eq.~\ref{eq:decompose}, where $\mathcal{S}(t)$ is allowed to traverse within an $\mathbb{R}^K$ subspace.
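The sinc-windowed blending of subspace groups can be sketched as follows, with random tensors standing in for the VM-decomposed components $\mathbf{P}_j^{xyz}$ and assumed toy sizes ($K=6$, $W=2$, so $d=3$):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, W = 4, 6, 2  # toy grid size; K = d * W basis components
d = K // W         # number of W-dimensional subspace groups

# P_j: random stand-ins for the components v (outer) M summed over the axes.
P = rng.standard_normal((K, D, D, D))
Q = P.reshape(d, W, D, D, D)  # group consecutive W components into d subspaces

def sinc(r):
    return 1.0 if r == 0 else np.sin(r) / r

def blended_subspace(t):
    # sinc-weighted combination of the d groups: near each knot t = n/(d-1),
    # essentially one W-dimensional subspace dominates, with smooth hand-offs.
    w = np.array([sinc((d - 1) * (t - n / (d - 1))) for n in range(d)])
    return np.einsum("n,nwxyz->wxyz", w, Q)  # Q-tilde(t)

def regularized_state(t, beta):
    # Eq. (manifold): locally the field traverses only an R^W subspace.
    return np.einsum("w,wxyz->xyz", beta, blended_subspace(t))

S_t = regularized_state(0.3, rng.standard_normal(W))
```

At any $t$ the field is a combination of only $W$ active components, rather than all $K$, realizing the local low-dimensional subspace constraint.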
\subsection{Volume Rendering}
\vspace{-0.5em}
Let us denote $\mathcal{C}_{\mathbf{x}}(t)$ and $\mathcal{Z}_{\mathbf{x}}(t)$ as continuously evolving light and density fields, respectively, obtained via Eq.~\ref{eq:manifold} and queried at 3D position $\mathbf{x}$. We can obtain density and light values at any $\mathbf{x}$ at time $t$ as,
\vspace{-0.5em}
\begin{equation}
\sigma(\mathbf{x},t), c(\mathbf{x},t) = \mathcal{Z}_{\mathbf{x}}(t), \mathcal{C}_{\mathbf{x}}(t).
\end{equation}
To compute the above values at an arbitrary continuous position $\mathbf{x}$, we tri-linearly interpolate the grids. Then, the rendering is done similarly to the original NeRF formulation: let $\mathbf{x}(h) = \mathbf{o} + h\mathbf{d}$ be a 3D location sampled on the ray emitted from camera center $\mathbf{o}$ in the direction of $\mathbf{d}$, passing through a pixel $p$. We can obtain the predicted pixel color $\Tilde{p}$ at a given time instance $t$ as,
\vspace{-0.5em}
\begin{equation}
\Tilde{p}(t) = \int T(h, t) \, \sigma(\mathbf{x}(h),t) \, c(\mathbf{x}(h),t) \, dh,
\end{equation}
where $T(h, t) = \exp \big(- \int_{-\infty}^{h} \sigma(\mathbf{x}(u),t) \, du \big)$ is the accumulated transmittance along the ray.
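This integral is approximated with the standard discrete quadrature (alpha compositing) used for NeRF-style rendering; a minimal sketch with toy per-sample densities, colors, and step sizes:

```python
import numpy as np

def render_pixel(sigmas, colors, deltas):
    # Discrete quadrature of the volume-rendering integral:
    # alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i} (1 - alpha_j).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = T * alphas
    return weights @ colors  # predicted pixel color

# Toy samples along one ray (illustrative values only).
sigmas = np.array([0.0, 0.5, 2.0, 0.1])                       # densities
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
deltas = np.full(4, 0.25)                                     # step sizes
rgb = render_pixel(sigmas, colors, deltas)
```

The weights are non-negative and sum to at most one, so the composited color stays in the valid range.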
We use the same discrete approximations used in \cite{mildenhall2021nerf} for the above formulas in practice. The final loss $\mathcal{L}$ used for training is the mean squared loss between $p$ and $\Tilde{p}$, along with a total variation (TV) loss spatially applied across grid values:
\vspace{-0.5em}
\begin{equation}
\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\|p_i(t) - \Tilde{p}_i(t)\|^2 + \lambda_1 TV(\mathcal{Z}(t)) + \lambda_2 TV(\mathcal{C}(t)).
\label{eq:loss}
\end{equation}
Two important remarks are in order: \textit{a}) our model only requires the TV loss as an explicit regularizer, as opposed to the multiple explicit regularizations used in many existing dynamic NeRF architectures, such as explicit foreground-background modeling\cite{tretschk2021non, gao2021dynamic}, energy preservation \cite{park2021nerfies}, or temporal consistency losses \cite{li2021neural, wang2021neural}. \textit{b}) To compensate for the insufficiency of neural priors in regularizing the architecture, many dynamic NeRF methods tend to adopt cumbersome training procedures to converge to a good minimum, \textit{e.g.}, sequential training of temporally-ordered frames \cite{pumarola2021d, li2021neural}, coarse-to-fine annealing of hyperparameters \cite{park2021nerfies}, or morphology processing \cite{yoon2020novel}. In contrast, we simply sample points randomly in time and space and feed them to the model for training. We argue that this is a strong indicator of the well-built inductive bias/implicit regularization of our architecture and the stability of our formulation.
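A minimal sketch of the training objective in Eq.~\ref{eq:loss}: mean squared error on pixel colors plus spatial TV penalties on the density and color grids (`lam1`/`lam2` are illustrative values for $\lambda_1$, $\lambda_2$, not the paper's settings):

```python
import numpy as np

def tv_3d(grid):
    # Total variation: mean absolute difference along each spatial axis
    # of a D x D x D grid, encouraging piecewise-smooth fields.
    return sum(np.abs(np.diff(grid, axis=a)).mean() for a in range(3))

def loss(pred, target, density_grid, color_grid, lam1=0.01, lam2=0.01):
    mse = np.mean((pred - target) ** 2)
    return mse + lam1 * tv_3d(density_grid) + lam2 * tv_3d(color_grid)

# Toy usage: random pixel batches and constant (hence zero-TV) grids.
rng = np.random.default_rng(0)
pred, target = rng.random(8), rng.random(8)
const = np.ones((4, 4, 4))
val = loss(pred, target, const, const)
```

With constant grids the TV terms vanish, so the loss reduces to the pure reconstruction error.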
\section{Experiments}
\label{sec:experiments}
\vspace{-0.2em}
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.9\textwidth]{figs/synthetic_vertical_t.pdf}
\vspace{-1em}
\caption{\textbf{Qualitative comparison on the synthetic dataset.} The shown reconstructions are from novel views at unseen time instances. As evident, both D-NeRF and NR-NeRF fail to accurately infer the 3D structure of the scenes containing texture and lighting changes (columns $1,2,5,6$). This behavior is caused by their inability to precisely disentangle the light and density fields (see Sec.~\ref{subsec:limitations}). In contrast, T-TensoRF performs relatively well in these scenes, as it achieves this disentanglement by construction. However, all three baselines exhibit poor reconstructions in the scale change and ball move scenes (columns $3,4,7,8$). This illustrates the sub-optimal localization of motion caused by the inferior factorization of time and space built into these models. Further, note that the objects in all the scenes are slightly misaligned in the baseline reconstructions, demonstrating sub-par disentanglement between scene and camera dynamics. In comparison, our model yields significantly better reconstructions, demonstrating its better formulation with respect to the above aspects.}
\label{fig:synthetic}
\end{figure*}
\input{tab-quant-our}
In this section, we empirically validate the efficacy of our proposed framework.
\noindent \textbf{Datasets: } We collect four synthetic scenes and four real-world scenes as our dataset. The synthetic scenes include texture changes, lighting changes, scale changes, and long-range movements. Similarly, the real-world scenes include lighting changes, long-range movements, and spatially concentrated dynamic objects. All the scenes consist of RGB images captured from a single moving camera along with camera poses. For more details on our datasets, see \supprefshort{suppsec:datasets}.
\noindent \textbf{Baselines:} We choose D-NeRF~\cite{pumarola2021d} and NR-NeRF~\cite{tretschk2021non} as our main baselines. Both are recently proposed dynamic NeRF models that adopt the ray deformation paradigm and only utilize RGB images from a monocular camera for supervision. The NR-NeRF architecture comprises an explicit neural network for isolating the motion of a scene, which provides an ideal baseline to evaluate the efficacy of the space-time priors in our model. For the above models, we performed a grid search for the optimal hyperparameters for each scene, for a fair comparison. In contrast, our model uses a single hyperparameter setting across all the scenes, demonstrating its robustness. Further, it is essential to precisely validate whether the superior performance of our model stems from the light/density disentanglement or the space-time factorization. Therefore, we design another baseline, T-TensoRF, which disentangles the light and density fields but does not factorize time and space dynamics (see \supprefshort{suppsec:tnerf}).
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.9\textwidth]{figs/real_closeup.png}
\vspace{-1em}
\caption{\textbf{Qualitative comparison on the real-world dataset (zoom in for a better view).} The shown examples are novel views reconstructed at unseen time instances. Note that in the flashlight scene, D-NeRF, NR-NeRF, and T-TensoRF fail to capture high-fidelity details in the background. On the other hand, in the cat-walking scene, where the object moves across a considerable range in space, they fail to recover the moving object accurately. In the flower scene, where the motion is constrained within a small region, the baselines perform fairly well. In comparison, our method exhibits superior performance in all the cases.}
\label{fig:real}
\end{figure*}
\input{tab-abl-basis}
\subsection{Synthetic scenes }
\vspace{-0.5em}
The synthetic dataset consists of four scenes: \emph{texture change, falling and scale, light move, and ball move}. See Fig.~\ref{fig:synthetic} for a qualitative comparison. As shown, D-NeRF and NR-NeRF fail to accurately model the color and light changes. This illustrates our claim in Sec.~\ref{sec:ray_bending} that, for full disentanglement of the light and density fields, the above methods require a block-diagonal Jacobian structure, which is an extremely restrictive condition. Similarly, D-NeRF and NR-NeRF both tend to deform the objects when scale changes and long-range movements are present. T-TensoRF, due to its ability to disentangle the light and density fields, adequately recovers light/texture changes. However, it exhibits inferior performance in the \emph{falling and scale} and \emph{ball move} scenes. Further, note that all the baselines fail to accurately learn the 3D positions of the objects, showcasing their inability to precisely disentangle camera and scene dynamics. In comparison, our method achieves significantly superior results in all the above aspects. See Table~\ref{tab:quant-our} for quantitative results.
\vspace{-0.2em}
\subsection{Real-world scenes }
\vspace{-0.5em}
The real-world dataset contains four challenging scenes: \emph{cat walking, flashlight, flower, and climbing}. The cat-walking and climbing scenes contain long-range movements. See Fig.~\ref{fig:real} and Fig.~\ref{fig:teaser} for qualitative comparisons on these scenes. As evident, when long-range movements are present, D-NeRF, NR-NeRF, and T-TensoRF fail to recover the high-fidelity details of the moving objects. Similarly, in the flashlight scene, the above methods fail to accurately capture granular details in the background. In the flower scene, where the dynamics are concentrated spatially, all the baselines perform fairly well. In contrast, our method shows better results with respect to all the aforementioned aspects. Interestingly, note that D-NeRF and NR-NeRF can both model lighting changes to an extent, as shown in the flashlight scene (see also \supprefshort{suppsec:comparisons}). This validates our insight in Sec.~\ref{sec:ray_bending} that so-called ray deformation models in fact encode density and light field dynamics instead of learning ray deformations. See Table~\ref{tab:quant-our} for quantitative results.
\subsection{Convergence}
\vspace{-1em}
Our method converges around $20\times$ faster than D-NeRF and $10\times$ faster than NR-NeRF (Fig.~\ref{fig:convergence}). We also noticed that our convergence is more stable compared to the baselines. For instance, NR-NeRF exhibited sudden divergences from the minima when training continued for a long time. Therefore, it was necessary to carefully monitor the training to determine the optimal termination point.
\begin{figure}[!htp]
\centering
\includegraphics[width=0.7\columnwidth]{figs/convergence.png}
\vspace{-5pt}
\caption{\textbf{Convergence.} Our model exhibits faster training compared to D-NeRF and NR-NeRF, and converges in $\sim 40k$ epochs. In comparison, D-NeRF and NR-NeRF take $\sim 800k$ and $\sim 200k$ epochs to converge, respectively. Time-wise, our model trains in $\sim 1.5$ hours per scene, which is $\sim 20\times$ and $\sim 10\times$ faster than D-NeRF and NR-NeRF, respectively.}
\vspace{-5pt}
\label{fig:convergence}
\end{figure}
\vspace{-0.5em}
\subsection{Ablation study}
\vspace{-0.5em}
The generic nature of our framework allows different implementations. Thus, it is intriguing to compare the neural trajectory basis against other possible time-basis functions that are complete in $L^2(\mathbb{R}, dt)$. Table~\ref{tab:abl-basis} presents a quantitative comparison with the DCT, Fourier, and Bernstein bases. As depicted, although these basis functions are also capable of providing acceptable results, the neural trajectory basis performs best. This is a strong indicator of the effectiveness of the architectural regularization that is built into the neural basis, which is vital for modeling complex dynamics. For further ablations, refer to \supprefshort{suppsec:ablations}.
\vspace{-3pt}
\section{Conclusions}
\label{sec:conclusions}
\vspace{-0.5em}
We offer a novel, generic framework for modeling dynamic 3D scenes that allows efficient factorization of the space and time dynamics. This factorization presents a platform for imposing well-designed space-time priors (inspired by NRS\textit{f}M\xspace) on NeRF, enabling high-fidelity novel view synthesis of dynamic scenes. Finally, we present an implementation of the proposed framework that demonstrates compelling results across complex dynamic scenes containing long-range movements, scale changes, and light/texture changes.
\section*{\Huge Supplementary Materials}
\end{center}
\addcontentsline{toc}{section}{Supplementary}
\renewcommand{\thesection}{\Alph{section}}
\setcounter{section}{0}
\section{Ray Deformation Networks}
\label{supsec:ray_bending}
\label{suppsec:ray_deformation}
\subsection{Ray deformation networks learn field dynamics}
In this section, we show evidence that the design methodology adopted by existing works for implementing the ray deformation framework does not learn point trajectories in space and, instead, acts as a light and density deformation module. Consider an MLP $\Psi^d:(x,y,z,t) \to (\Delta x, \Delta y, \Delta z)$ outputting the transformation of a point $(x,y,z)$ at time instant $t$ with respect to a canonical setting. Then, a second MLP $\Psi^c:(x+\Delta x, y+\Delta y, z+\Delta z, \textbf{d}) \to (c, \sigma)$ takes in the deformed inputs and the viewing direction $\textbf{d}$, and predicts the light and density fields $(c, \sigma)$. However, one can interpret the above pipeline from another perspective. Observe that $\Psi^d$ and $\Psi^c$ can be considered as a single deep MLP $\Psi^{d \wedge c}: (x,y,z,t) \to (c, \sigma)$, where the bottleneck is three-dimensional. Further, there exists a skip connection from $(x,y,z)$ to the bottleneck. From this perspective, the above implementation is simply an MLP with a skip connection and a bottleneck of dimension three, modeling a function from $(x,y,z,t)$ to $(c, \sigma)$. See Fig.~\ref{fig:ray_deform_network} for a visual illustration of this interpretation. We empirically solidify this argument by showing that such networks can indeed model light and density deformations individually (to an extent), which is impossible for a model that only learns point movements in space, as prescribed by the ray deformation framework (see Fig.~\ref{fig:lightdensity}). Next, we discuss the limitations of ray deformation networks.
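This single-network reading can be sketched with a toy composition. The closed-form `deform` and `canonical` functions below are illustrative stand-ins for the MLPs $\Psi^d$ and $\Psi^c$, not our trained networks; the point is only that the composed map is a plain $\mathbb{R}^4 \to \mathbb{R}^2$ function whose time input reaches the output solely through the three-dimensional bottleneck plus the skip connection.

```python
import math

def deform(x, y, z, t):
    """Stand-in for the deformation MLP Psi^d: predicts per-point offsets at time t."""
    return (0.1 * t, 0.0, -0.05 * t)

def canonical(x, y, z, d):
    """Stand-in for the canonical MLP Psi^c: maps a (deformed) point and a
    view direction d to (color, density)."""
    c = math.sin(x + y + z + d)
    sigma = math.exp(-(x * x + y * y + z * z))
    return (c, sigma)

def composed(x, y, z, t, d):
    """The merged network Psi^{d ^ c}: an R^4 -> R^2 map in which t influences
    the output only through the 3-dimensional bottleneck (dx, dy, dz),
    while (x, y, z) also reach the bottleneck via the skip connection."""
    dx, dy, dz = deform(x, y, z, t)
    return canonical(x + dx, y + dy, z + dz, d)

color, density = composed(0.5, 0.2, -0.1, 1.0, 0.3)
```

At $t=0$ the toy offsets vanish, so the composed network reduces to the canonical one; at other times, both outputs vary with $t$, i.e., the pipeline is free to encode field dynamics rather than pure point motion.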
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/canon.pdf}
\caption{\textbf{Our method does not rely on a canonical scene configuration}. Existing ray deformation models require choosing a canonical scene configuration at a user-defined time instance, which can hinder their performance in scenes where new information appears in subsequent frames. In contrast, our model does not suffer from such a limitation. }
\label{fig:canon_fail}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/texture_change.png}
\caption{\textbf{Ray deformation networks parameterize light and density fields instead of ray deformations.} We show evidence for this using three example scenes. From left to right: \textit{1)} a real-world scene with light changes on the person's face; \textit{2)} the colors of the shapes are changing; \textit{3)} the light source is shifting position in the scene. We can observe that both ray deformation works, D-NeRF~\cite{pumarola2021d} and NR-NeRF~\cite{tretschk2021non}, learn the texture changes to an extent, which is not possible by simply learning ray deformations. However, the reconstructions are still sub-par, as light and density dynamics are entangled in these frameworks.}
\label{fig:lightdensity}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/ray_bending.pdf}
\caption{\textbf{Ray deformation networks can be interpreted as a single deep network with a three-dimensional bottleneck.} With this interpretation, it is clear that ray deformation networks can indeed learn light and density evolution independently (to an extent), instead of simply learning ray deformations.}
\label{fig:ray_deform_network}
\vspace{-10pt}
\end{figure}
\subsection{Dependency on a canonical frame}
The dependency on a learned canonical frame is one root cause of the struggle of ray deformation networks in scenes where an object undergoes long translations. This formulation also works against the smoothness bias of neural networks, which are then required to learn non-smooth representations. Fig.~\ref{fig:canon_fail} shows a toy example of a ball moving along a constant trajectory across the scene. The canonical model is forced to learn an average representation of the scene and is unable to correctly represent the canonical frame corresponding to the ground-truth image at time $t = 0$. This formulation also hinders ray deformation models from encoding fine details, due to the constant averaging the canonical frame maintains throughout time. In Fig.~\ref{fig:teaser}, it can be seen that NR-NeRF~\cite{tretschk2021non} fails to capture fine details such as shirt wrinkles and the climber's legs.
\subsection{Entanglement of light and density fields}
Let $g(\mathbf{x}):\mathbb{R}^4 \to \mathbb{R}^C$ denote the output of an intermediate layer of the network in Fig.~\ref{fig:ray_deform_network}. Further, let $\psi_l:\mathbb{R}^C \to \mathbb{R}$ and $\psi_d:\mathbb{R}^C \to \mathbb{R}$ be network branches that predict the light and density fields, respectively, taking $g(\mathbf{x})$ as input. Now, consider a scenario where the light of the scene or the texture of objects changes while the objects remain static. In this case, we need the light field to be a function of time, while the density field should remain constant. Consider the Jacobians,
\begin{equation}
\mathbf{J}_{g} =
\begin{bmatrix}
\frac{\partial g_1(\mathbf{x})}{\partial x} & \frac{\partial g_1(\mathbf{x})}{\partial y} & \frac{\partial g_1(\mathbf{x})}{\partial z} & \frac{\partial g_1(\mathbf{x})}{\partial t} \\
\vdots & \vdots & \vdots & \vdots \\
\frac{\partial g_C(\mathbf{x})}{\partial x} & \frac{\partial g_C(\mathbf{x})}{\partial y} & \frac{\partial g_C(\mathbf{x})}{\partial z} & \frac{\partial g_C(\mathbf{x})}{\partial t}
\end{bmatrix},
\end{equation}
\begin{equation}
\mathbf{J}_{\psi_d} =
\begin{bmatrix}
\frac{\partial \psi_d(g(\mathbf{x}))}{\partial g_1(\mathbf{x})} & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial g_2(\mathbf{x})} & \dots & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial g_C(\mathbf{x})}
\end{bmatrix}.
\end{equation}
Then, the Jacobian of $\psi_d \circ g$ becomes,
\begin{equation}
\label{equ:jac}
\mathbf{J}_{\psi_d \circ g} =
\begin{bmatrix}
\frac{\partial \psi_d(g(\mathbf{x}))}{\partial x} & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial y} & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial z} & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial t}
\end{bmatrix} = \mathbf{J}_{\psi_d} \mathbf{J}_{g}.
\end{equation}
We need $\frac{\partial \psi_d(g(\mathbf{x}))}{\partial t} = 0$, since the density is not a function of time. Therefore, the $4^{th}$ column of $\mathbf{J}_g$ has to be orthogonal to $\mathbf{J}_{\psi_d}$. This can be achieved via one of the following three scenarios:
\begin{itemize}
\item \textbf{Scenario 1:} \textit{$\mathbf{J}_{\psi_d}$ is a zero vector. }
\item \textbf{Scenario 2:} \textit{The $4^{th}$ column of $\mathbf{J}_g$ is zero.}
\item \textbf{Scenario 3:} \textit{Both Scenarios 1 and 2 are false, but $\mathbf{J}_{\psi_d}$ is orthogonal to the $4^{th}$ column of $\mathbf{J}_g$.}
\end{itemize}
However, Scenario 1 implies that $\frac{\partial \psi_d \circ g(\textbf{x})}{\partial x,y,z}$ is zero (see Eq.~\ref{equ:jac}), which makes the density constant across space. On the other hand, Scenario 2 implies that $\frac{\partial \psi_l}{\partial t}$ is zero, since $\frac{\partial \psi_l}{\partial t} = \frac{\partial \psi_l}{\partial g}\frac{\partial g}{\partial t}$. That is, under Scenario 2, the light cannot be a function of time. Further, Scenario 3 generally makes $\mathbf{J}_{\psi_d}$ a function of $\frac{\partial {g(\mathbf{x})}}{\partial t}$. Consider, for instance, the case where $g$ obeys the PDE
\begin{equation}
\frac{\partial {g(\mathbf{x})}}{\partial t} = q(t),
\end{equation}
where $q(t)$ is some function of $t$. Then, it is clear that $\mathbf{J}_{\psi_d}$ becomes a function of $t$. Moreover, by Eq.~\ref{equ:jac}, $\frac{\partial \psi_d \circ g(\textbf{x})}{\partial x,y,z}$ also becomes a function of time, unless both $\mathbf{J}_{\psi_d}$ and $\mathbf{J}_g$ preserve a block structure such that
\begin{equation}
\mathbf{J}_{g} =
\begin{bmatrix}
\frac{\partial g_1(\mathbf{x})}{\partial x} & \frac{\partial g_1(\mathbf{x})}{\partial y} & \frac{\partial g_1(\mathbf{x})}{\partial z} & 0\\
\vdots & \vdots & \vdots & \vdots \\
\frac{\partial g_c(\mathbf{x})}{\partial x} & \frac{\partial g_c(\mathbf{x})}{\partial y} & \frac{\partial g_c(\mathbf{x})}{\partial z} & 0 \\
0 & 0 & 0 & \frac{\partial g_{c+1}(\mathbf{x})}{\partial t} \\
\vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \frac{\partial g_C(\mathbf{x})}{\partial t}
\end{bmatrix},
\end{equation}
and
\begin{equation}
\mathbf{J}_{\psi_d} =
\begin{bmatrix}
\frac{\partial \psi_d(g(\mathbf{x}))}{\partial g_1(\mathbf{x})} & \dots & \frac{\partial \psi_d(g(\mathbf{x}))}{\partial g_c(\mathbf{x})} & 0 & \dots & 0
\end{bmatrix}.
\end{equation}
Note that this is an extremely restrictive solution that is seldom achieved in practice under general conditions, due to the ill-posed nature of the problem. In most cases, the networks tend to converge to solutions where the $4^{th}$ column of $\mathbf{J}_g$ becomes non-zero in order to model the light changes, which in turn makes the density a function of time. This causes an inherent entanglement of the light and density fields. The toy example results shown in Fig.~\ref{fig:lightdensity} are an illustration of this behavior.
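The chain-rule argument in Eq.~\ref{equ:jac} can be checked numerically on a toy instance. The closed-form $g$ and $\psi_d$ below are hand-picked for illustration and are not part of any trained model: the fourth column of $\mathbf{J}_g$ is nonzero and $\mathbf{J}_{\psi_d}$ is not orthogonal to it, so the composed density inherits a time dependency, exactly as the entanglement argument predicts.

```python
import math

EPS = 1e-6

def g(x, y, z, t):
    # Toy intermediate layer with C = 2 features; the second feature depends
    # on t, so the 4th column of J_g is nonzero (Scenario 2 is violated).
    return (x + y * z, math.sin(t))

def psi_d(g1, g2):
    # Toy density head; its Jacobian (1, 1) is not orthogonal to the 4th
    # column of J_g, so Scenario 3 is violated as well.
    return g1 + g2

def ddt_density(x, y, z, t):
    # Central finite-difference estimate of d(psi_d o g)/dt.
    lo = psi_d(*g(x, y, z, t - EPS))
    hi = psi_d(*g(x, y, z, t + EPS))
    return (hi - lo) / (2 * EPS)

# Equals cos(t) analytically by the chain rule: nonzero for generic t,
# i.e., the density has become a function of time.
rate = ddt_density(0.3, 0.1, 0.2, 0.5)
```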
\subsection{Limited expressiveness}
Consider the parameterization of the density field. As evident from Fig.~\ref{fig:ray_deform_network}, it is modeled with a network with a bottleneck of dimension three. In this setting, the density field becomes a manifold of dimension three; in other words, its dynamics can be modeled with only three parameters. However, recall that in complex scenes, particular points of the density field may need to be parameterized by $(x,y,z,t)$ simultaneously, i.e., it is required that $\frac{\partial \sigma}{\partial x}, \frac{\partial \sigma}{\partial y}, \frac{\partial \sigma}{\partial z}, \frac{\partial \sigma}{\partial t} \neq 0$ at some points in space-time. Thus, a bottleneck of dimension three hinders the network from modeling such complex dynamics.
\subsection{Entanglement of space and temporal variations}
A ray deformation network can be considered as a single deep network with a bottleneck of dimension three that takes $(x,y,z,t)$ as input and models light/density field deformations. However, space and time often exhibit contrasting spectral properties: objects deform smoothly across time, but space may contain sharp, high-frequency variations. Therefore, using a single neural network to model these two extremes can be sub-optimal. Generally, a network with a higher bandwidth is ideal for modeling space, while a lower bandwidth is necessary for modeling time.
Fig.~\ref{fig:entanglement} shows an illustration. Using a high-bandwidth network for interpolating across time allows the network to perfectly memorize the training data, but can result in erratic interpolations. On the other hand, using a low-bandwidth network leads to low-fidelity reconstructions of space. Note that since space is typically more densely sampled than the time axis in the dynamic-NeRF setting, i.e., supervision is available more densely, a high-bandwidth network can recover both low and high frequencies in space. This is a key factor that motivates space/time factorization, as in our framework. This behavior was also observed previously by \cite{ramasinghe2022beyond} and \cite{ramasinghe2022you}.
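As a concrete sketch of how bandwidth can be controlled, a NeRF-style positional embedding truncates the frequency support by its number of octaves: fewer octaves yield a lower-bandwidth (smoother) network, more octaves a higher-bandwidth one. The exact $\sin/\cos$ form and the octave counts below are illustrative assumptions, not the precise encoder used in our toy experiment.

```python
import math

def positional_embedding(t, num_octaves):
    """NeRF-style encoding: [sin(2^k pi t), cos(2^k pi t)] for k < num_octaves.
    Restricting num_octaves restricts the frequency support, and hence the
    bandwidth of any network consuming these features."""
    feats = []
    for k in range(num_octaves):
        freq = (2.0 ** k) * math.pi
        feats.append(math.sin(freq * t))
        feats.append(math.cos(freq * t))
    return feats

low_bw = positional_embedding(0.25, num_octaves=2)    # smooth: suits the time axis
high_bw = positional_embedding(0.25, num_octaves=10)  # sharp: suits space
```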
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/entanglement.png}
\caption{\textit{Left column: } A low-bandwidth network cannot capture high-frequency content adequately, which can be sub-optimal for modeling space. However, a low-bandwidth network is ideal for interpolating sparse low frequency points, which is optimal for modeling temporal dynamics. \textit{Right column: } A high-bandwidth network can reconstruct sharp variations, but results in erratic interpolations. This can be detrimental for smooth temporal dynamics modeling. We used four layer ReLU networks with positional embeddings for encoding the signals. We obtained networks with different bandwidths by changing the frequency support of the positional embedding layer.}
\label{fig:entanglement}
\vspace{-10pt}
\end{figure}
\section{Ablations}
\label{suppsec:ablations}
In this section, we show experiments over a varying number of basis functions and the effect of manifold regularization. As seen in Table~\ref{tab:abl-basis-number}, the performance saturates at around $24$ basis functions. Further, the effect of manifold regularization is quite significant (Table~\ref{tab:abl-manifold}).
\input{tab-abl-supp}
\input{tab-manifold-supp}
\section{Implicit regularization of the neural trajectory basis}
\label{suppsec:neural_basis}
Using a combination of trajectory basis functions to reconstruct the motion of a set of points is popular in NRS\textit{f}M\xspace~\cite{akhter2008nonrigid}. This technique implicitly restricts the solution to a known low-dimensional subspace of smooth trajectories. One such popular trajectory basis is the DCT basis. A key advantage of this method over a shape basis is that an object-agnostic basis can be employed across multiple scenes. However, although the basis type is scene-agnostic, the basis dimensionality depends on multiple factors such as scene dynamics, camera dynamics, and sequence length~\cite{park20113d}. Thus, the dimensionality of the basis should be tuned per scene.
In an attempt to solve the above problem, \cite{zhu20113d} applied an $\ell_1$ penalty on the coefficients of the trajectory basis, using a sparse-coding algorithm~\cite{lee2006efficient} in practice. Although this strategy was effective, it ignores an important prior: for natural signals, the energy tends to concentrate in the lower DCT frequencies.
An alternative and more effective way of regularizing the trajectory basis has been to minimize the trajectory responses to high-pass filters. \cite{valmadre2012general} showed that such regularization is able to enforce local temporal constraints, rather than global ones, which extends trivially to sequences of different lengths. In particular, they showed that this mechanism alleviates the need to tune the basis size. This approach also has a physical interpretation: minimizing the $\ell_2$ norm of the second-order derivative is equivalent to assuming constant mass subject to isotropic Gaussian-distributed forces~\cite{salzmann2011physically}. Similarly, minimizing the $\ell_2$ norm of the first-order derivative is equivalent to finding the solution with the least kinetic energy.
We observed similar behavior in our architecture. As shown by the blue curve in the left plot of Fig.~\ref{fig:high_pass}, when the number of basis functions is increased, the loss reaches a minimum but then increases again. This aligns with the intuition that the motion should be restricted to a low-dimensional manifold of smooth trajectories. We then apply 1D convolutions to the DCT trajectories with kernels $[-1,1]$ and $[-1,2,-1]$, and minimize the $\ell_1$ norm of the convolution outputs. These DCT trajectories are then used as the basis functions for modeling the temporal dynamics of the light and density fields. As shown by the orange curve, with this strategy the performance of the model becomes almost agnostic to the basis size beyond the minimum. This result mirrors the conclusions of \cite{valmadre2012general}.
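The penalty just described can be sketched as follows. The DCT-II sampling and the $\ell_1$ penalty on the $[-1,1]$ and $[-1,2,-1]$ filter responses follow the text; the trajectory length and the basis indices compared are illustrative.

```python
import math

def dct_trajectory(k, T):
    """k-th DCT-II basis function sampled at T time steps."""
    return [math.cos(math.pi * (t + 0.5) * k / T) for t in range(T)]

def conv1d(signal, kernel):
    """Valid-mode 1D filtering of a trajectory."""
    n = len(kernel)
    return [sum(kernel[i] * signal[j + i] for i in range(n))
            for j in range(len(signal) - n + 1)]

def high_pass_penalty(traj):
    """l1 norm of the responses to the [-1, 1] and [-1, 2, -1] kernels."""
    return (sum(abs(v) for v in conv1d(traj, [-1, 1])) +
            sum(abs(v) for v in conv1d(traj, [-1, 2, -1])))

T = 64
# Smooth (low-frequency) trajectories incur a far smaller penalty than
# jagged ones, biasing the solution toward smooth motion.
smooth = high_pass_penalty(dct_trajectory(1, T))
jagged = high_pass_penalty(dct_trajectory(20, T))
```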
Interestingly, we observed that with the neural basis, this regularization is achieved implicitly; see the right plot of Fig.~\ref{fig:high_pass}. The shown results are for the test set of the ball move scene. As evident, the performance of the model almost saturates after a certain number of basis functions, eliminating the need to carefully tune the number of basis functions for each sequence. This result is a powerful indication of the strong architectural bias that stems from neural networks.
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/high_pass.png}
\caption{\textbf{Implicit regularization of the neural trajectory basis.} \textit{Left:} With the DCT basis, the performance is sensitive to the number of basis functions. After an optimal basis size, the performance decreases. However, this can be avoided with penalizing the trajectory output on convolutional kernels ($[-1,1], [-1,2,-1]$). \textit{Right:} This regularization is implicitly achieved by the neural basis. After a certain number of basis functions, the performance remains approximately the same.}
\label{fig:high_pass}
\vspace{-10pt}
\end{figure}
\section{Datasets and evaluation}
\label{suppsec:datasets}
We collect four synthetic scenes and four real-world scenes as our dataset. All the scenes consist of RGB images captured from a single moving camera, along with camera poses. The synthetic scenes are texture change, falling and scale, light move, and ball move. The texture change scene includes texture changes of objects. The light move scene contains static objects but a moving light source. The falling and scale scene contains objects that change scale, and the ball move scene consists of objects with long-range movements. Similarly, the real-world scenes are climbing, cat walking, flashlight, and flower. The climbing and cat walking scenes contain long-range movements, while the flashlight scene contains light changes. In contrast, the flower scene contains spatially concentrated dynamics.
For each real-world scene, we used $12$ consecutive frames as training frames and the subsequent $4$ frames as testing frames, throughout the video. For the synthetic scenes, we used an unseen fixed pose to render the test frames across time. For evaluation, we used PSNR, SSIM, and LPIPS, as commonly done in the literature~\cite{tretschk2021non, park2021nerfies, pumarola2021d}.
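Of the three metrics, PSNR follows directly from the mean squared error (SSIM and LPIPS require reference implementations and are omitted here). A minimal sketch, assuming pixel values normalized to $[0, 1]$:

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists,
    assuming values normalized to [0, max_val]."""
    assert len(pred) == len(target)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01 and hence PSNR = 20 dB.
score = psnr([0.5, 0.6, 0.7], [0.6, 0.7, 0.8])
```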
\section{Hyperparameters and training}
\label{suppsec:hyper}
We use $24$ basis functions for modeling each of the light and density fields. For manifold regularization, we use a submanifold dimension of $8$. For generating the neural trajectories, we use three-layer ReLU networks with positional embeddings. We choose $0.1$ for both $\lambda_1$ and $\lambda_2$ in Eq.~\ref{eq:loss}. For training, we used the Adam optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.99$. We used cyclic learning rates for training both the neural networks and the coefficient tensors, starting at $0.001$ for the neural networks and $0.02$ for the coefficient tensors.
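The text above does not spell out the cyclic schedule; as one common choice, a triangular cycle between a base rate and a peak rate would look like the sketch below. The `base_lr` and `step_size` values are illustrative assumptions, with `max_lr` matching the stated network starting rate.

```python
def triangular_lr(step, base_lr, max_lr, step_size):
    """Triangular cyclic schedule: the rate ramps linearly base -> max over
    `step_size` steps, then max -> base, and the cycle repeats."""
    cycle = (step // (2 * step_size)) + 1
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# Illustrative cycle: peak 1e-3 as in the text; base 1e-4 and a 100-step
# half-cycle are assumptions.
lrs = [triangular_lr(s, base_lr=1e-4, max_lr=1e-3, step_size=100)
       for s in range(400)]
```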
\section{T-TensoRF}
\label{suppsec:tnerf}
Two unique features of our framework are the light/density disentanglement and the space/time factorization. Thus, it is necessary to properly attribute the superior performance of our model to these two factors. To this end, we design a baseline that completely disentangles the light and density fields but does not factorize space and time. Fig.~\ref{fig:t-tensorf} shows the overall architecture. Here, we first model the light field $\mathcal{S}_{c}$ and density field $\mathcal{S}_{\sigma}$ as 3D tensors, each decomposed into a linear combination of outer products between matrices and vectors:
\begin{equation}
\mathcal{S}_{\sigma} = \sum_{j=1}^N( \mathbf{v}^z_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{xy} + \mathbf{v}^x_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{yz} + \mathbf{v}^y_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{xz}),
\label{eq:t-nerf-density}
\end{equation}
\begin{equation}
\mathcal{S}_{c} = \sum_{j=1}^N( \mathbf{v}^z_{c, j} \otimes \mathbf{M}_{c, j}^{xy} + \mathbf{v}^x_{c, j} \otimes \mathbf{M}_{c, j}^{yz} + \mathbf{v}^y_{c, j} \otimes \mathbf{M}_{c, j}^{xz}).
\label{eq:t-nerf-color}
\end{equation}
For querying continuous 3D positions, we tri-linearly interpolate the resultant grid. Let
\begin{equation}
R_{c,j} = ( \mathbf{v}^z_{c, j} \otimes \mathbf{M}_{c, j}^{xy} + \mathbf{v}^x_{c, j} \otimes \mathbf{M}_{c, j}^{yz} + \mathbf{v}^y_{c, j} \otimes \mathbf{M}_{c, j}^{xz})
\end{equation}
and
\begin{equation}
R_{\sigma,j} = ( \mathbf{v}^z_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{xy} + \mathbf{v}^x_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{yz} + \mathbf{v}^y_{\sigma, j} \otimes \mathbf{M}_{\sigma, j}^{xz}),
\end{equation}
and $R_{c,j}(\mathbf{x})$, $R_{\sigma,j}(\mathbf{x})$ denote the values queried at $\mathbf{x}$. Then, we use two linear networks $L_\sigma, L_c:\mathbb{R}^N \to \mathbb{R}^F$ to generate $F$-dimensional density/light feature vectors $(\mu_\sigma, \mu_c)$ for each 3D position $\mathbf{x}$ as
\begin{equation}
\mu_{\sigma} = L_{\sigma}(R_{\sigma,1}, \dots, R_{\sigma,N}),
\end{equation}
\begin{equation}
\mu_c = L_c(R_{c,1}, \dots, R_{c,N}).
\end{equation}
This is equivalent to generating a density/light feature vector for each 3D position of the scene. Then, we concatenate these feature vectors with the scalar time value and feed them to a 4-layer ReLU network to obtain color and density values for each $\mathbf{x}$. The neural rendering and training are done similarly to our model.
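The vector-matrix decomposition above can be sketched for a single field on a tiny discrete grid. This pure-Python version is illustrative only: an actual implementation would use tensor libraries and the tri-linear interpolation mentioned above, whereas here we query integer voxel coordinates.

```python
def query_vm(components, i, j, k):
    """Evaluate a vector-matrix decomposed 3D tensor at voxel (i, j, k).
    Each component holds vectors v^x, v^y, v^z and matrices M^xy, M^yz, M^xz,
    contributing v^z[k]*M^xy[i][j] + v^x[i]*M^yz[j][k] + v^y[j]*M^xz[i][k],
    mirroring one summand of the decomposition."""
    total = 0.0
    for c in components:
        total += (c["vz"][k] * c["Mxy"][i][j] +
                  c["vx"][i] * c["Myz"][j][k] +
                  c["vy"][j] * c["Mxz"][i][k])
    return total

# One illustrative component (N = 1) on a 2x2x2 grid.
comp = {
    "vx": [1.0, 2.0], "vy": [0.5, 1.5], "vz": [1.0, -1.0],
    "Mxy": [[1.0, 0.0], [0.0, 1.0]],
    "Myz": [[0.2, 0.4], [0.6, 0.8]],
    "Mxz": [[0.1, 0.3], [0.5, 0.7]],
}
val = query_vm([comp], 1, 0, 1)
```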
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\columnwidth]{figs/t-tensorf.pdf}
\caption{\textbf{The T-TensoRF architecture.} We develop a baseline that disentangles light and density fields, but does not factorize time and space. See Sec.~\ref{suppsec:tnerf} for a detailed description.}
\label{fig:t-tensorf}
\end{figure}
\section{Novel view generation}
\label{suppsec:comparisons}
In this section, we offer more qualitative comparisons. For the real-world scenes, we first fix the pose and vary time to generate novel views. Figs.~\ref{fig:climbing_fp}, \ref{fig:cat_fp}, \ref{fig:flower_fp}, and \ref{fig:flashlight_fp} depict the results. Next, we fix the time and generate novel views by changing the pose. Figs.~\ref{fig:climbing_ft}, \ref{fig:cat_ft}, \ref{fig:flower_ft}, and \ref{fig:flashlight_ft} depict the corresponding results. As evident, our model exhibits significantly superior performance over all the instances. Recall that the training images for these scenes are obtained from a single moving camera; therefore, only a single image is available for a particular time instance. Thus, this reconstruction task is a severely underconstrained problem, especially in the context of complex real-world dynamics. The superior results shown by our model are therefore a strong indicator of its inbuilt architectural bias that implicitly regularizes the problem.
We also conduct experiments on the synthetic dataset released by \cite{pumarola2021d}. Figs.~\ref{fig:standup}, \ref{fig:jumping}, \ref{fig:mutant}, \ref{fig:bb}, \ref{fig:trex}, and \ref{fig:hook_ft} depict the results. As shown, our model is able to generate novel views in both constant-pose and constant-time settings.
\begin{figure}[!htp]
\centering
\includegraphics[width=0.7\columnwidth]{figs/fp_climb.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the climbing scene.} We fix the pose and generate views by varying time. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:climbing_fp}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/cat_fp.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the cat scene.} We fix the pose and generate views by varying time. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:cat_fp}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/flower_fp.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the flower scene.} We fix the pose and generate views by varying time. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:flower_fp}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/flashlight_fp.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the flashlight scene.} We fix the pose and generate views by varying time. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:flashlight_fp}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/climbing_ft.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the climbing scene.} We fix the time and generate views from different camera poses. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:climbing_ft}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/flower_ft.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the flower scene.} We fix the time and generate views from different camera poses. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:flower_ft}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/cat_ft.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the cat scene.} We fix the time and generate views from different camera poses. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:cat_ft}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/flashlight_ft.jpg}
\caption{\textbf{A qualitative comparison over the generated novel views on the flashlight scene.} We fix the time and generate views from different camera poses. As depicted, our model is able to achieve superior results in all the instances.}
\label{fig:flashlight_ft}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/standu_up.png}
\caption{Qualitative examples generated by our model on the \emph{standup} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:standup}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/jumping.png}
\caption{Qualitative examples generated by our model on the \emph{jumping} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:jumping}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/trex.jpg}
\caption{Qualitative examples generated by our model on the \emph{trex} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:trex}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/mutant.jpg}
\caption{Qualitative examples generated by our model on the \emph{mutant} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:mutant}
\vspace{-10pt}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/bb.jpg}
\caption{Qualitative examples generated by our model on the \emph{bouncing balls} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:bb}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=1.\columnwidth]{figs/hook_ft_2.jpg}
\caption{Qualitative examples generated by our model on the \emph{hook} scene in \cite{pumarola2021d} synthetic dataset.}
\label{fig:hook_ft}
\end{figure}
% arXiv:2302.13543 (https://arxiv.org/abs/2302.13543), 2023-02-28